Addressing The API Consumption Problem

APIs are a key feature of modern distributed apps. So much so, in fact, that their proliferation has been dubbed "the API economy" for the way it is transforming how companies are built and operated, and how employees get work done.

However, enterprise API strategies have focused on modernizing legacy apps and enabling access to data from web and mobile app UIs. APIs for apps have thus become an adjunct to the UI they are accessed from, limited in scope to exposing a specific application's data to external users. SaaS applications have also contributed to the rapid growth of APIs for apps. Today, the average enterprise uses 1,935 apps (a 15% increase from 2017), and each of these apps is accessible through dozens of APIs.

Creating new products and better customer journeys often requires interoperation between multiple internal applications, as well as the embedding of Artificial Intelligence (AI) technologies, learning systems, and other third-party services. But many APIs are developed with a single application in mind and are UI-centric: they are not optimized for connecting multiple different applications, frequently without user interaction. Delivering an effective business automation solution using such APIs can entail considerable development time and cost, maintenance overhead, and reduced reusability. Most importantly, it erodes the business agility needed to stay competitive.

Taking an API-first approach with automation in mind

As is evident from the rise of consumer-facing businesses like Uber, Airbnb, and Pinterest, as well as B2B businesses like Stripe, Twilio, and Algolia, digital businesses need an API-first approach and a new category of APIs: APIs for automation, which provide the agility to improve time-to-market, reduce the cost of change, and address a diverse set of use cases both now and in the future.

According to a recent report from the Capgemini Research Institute, automating more processes at scale can lead to significant cost savings across lines of business: organizations can realize savings of anywhere from 7% in sales and marketing to 13% in finance and accounting, for example.

Automation-focused APIs at the application level can be readily used to construct high-level business workflows that span applications. Examples of such workflows include lead management, order-to-cash, pre-hire to hire, incident management, case management, invoice processing, and partner/customer onboarding. For any of these use cases, the business process API will need to combine the underlying APIs of multiple business apps with a set of business rules.

For lead management, as an example, the business process automation API will consume the APIs of a marketing app to get the raw lead data, enrich the lead with firmographic data from third-party services, cleanse it by referencing CRM data, and route it to the appropriate sales account executive in the sales app.
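
A minimal Python sketch of such a composition might look like the following; every function here is a hypothetical stub standing in for the marketing, enrichment, CRM, and sales APIs involved:

def get_raw_lead(lead_id):
    # Marketing app API (stub): returns raw lead data.
    return {"id": lead_id, "company": "acme corp", "territory": "EMEA"}

def enrich(lead):
    # Third-party enrichment service (stub): adds firmographic data.
    lead["firmographics"] = {"employees": 5000, "industry": "Manufacturing"}
    return lead

def cleanse(lead):
    # CRM reference data (stub): deduplicate and normalize.
    lead["company"] = lead["company"].strip().title()
    return lead

def route(lead):
    # Sales app API (stub): assign to the right account executive.
    lead["owner"] = "ae-42"
    return lead

def lead_management(lead_id):
    """A business process API composed from the underlying application APIs."""
    return route(cleanse(enrich(get_raw_lead(lead_id))))

print(lead_management("L-123"))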

The business process API would enable a business workflow to be consumed based on a trigger event, or from an application or bot/voice UI. Depending on how and where applications are accessed and invoked, the API will need to support multiple interaction models (a small sketch of the scheduled model follows the list):

  • Request/Response interaction to support a small number of records
  • Real-time asynchronous notifications
  • Real-time bidirectional communication
  • Scheduled and periodic events: driven by a clock or a timer, on a user-specified schedule or interval
  • Retrieval or load of large volumes of data, possibly in batches
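
As one illustration from the list above, the scheduled model can be as simple as a timer loop that triggers the workflow at a user-specified interval (a minimal sketch):

import time

def run_on_schedule(task, interval_seconds):
    # Clock/timer-driven trigger: invoke the workflow on a fixed interval.
    while True:
        task()
        time.sleep(interval_seconds)

# Example: run_on_schedule(lambda: print("sync leads"), interval_seconds=3600)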

Unlike APIs designed to support the UI for a single application, which are well-defined and less frequently updated, automation-centric APIs are designed to learn and improve—hence they are updated more frequently.

Scale requires democratization

Developing APIs for apps requires heavyweight developer skills and tools: analyzing underlying data structures; managing authentication, security, performance, uptime, and availability; and handling monetization models, documentation, discovery, quality of service, versioning, and more. Such APIs are limited in number and are typically built by a small team of API-development experts.

On the other hand, APIs for automation can easily run into the thousands in a mid- to large-sized enterprise. Rapidly developing APIs for automation at scale using expensive, centralized IT resources and tools is unsustainable and often leads to large-scale IT project backlogs.

Developing and consuming APIs at scale requires a democratized model where a larger number of roles can contribute to the API catalog, following the taxonomy and standards set by a central governing team.

Businesses will need to adopt technologies that enable a broader spectrum of roles, including business analysts, application admins, data scientists, sales ops, marketing ops, finops, and other business-savvy functional roles, to build APIs for automation.

Four new requirements and considerations for API development

Digital transformation is successful when APIs are developed with a consumption-first mindset. But what are the new requirements and considerations that API developers and product managers must consider when designing APIs with consumption in mind?

There are four specific capabilities APIs need to address better when they are designed for consumption.

1. Support for Timelines

Time should be a supported sort criterion within an API. Many key use cases involve retrieving records in chronological order. This implies that records have associated timestamps (preferably high-resolution), ideally including at least a creation time and a modification time.

A set of chronologically ordered records is a timeline. Timelines should be sortable by time and filterable: in particular, it should be possible to specify a beginning timestamp and fetch records whose timestamps are equal to or newer than that value. (Supporting an end timestamp is good, too.)

Timelines should be paginated. Usually, this is done by allowing a page size and page number to be specified in the API. The page size sets a limit on the number of records returned in one call, and incrementing the page number allows fetching the next set of records.

An example of an API call supporting ordering, filtering, and pagination might be:

https://www.acme.com/api/v2/tickets?offset=2018-09-11T09:30:00Z&sort_by=created_at&order=asc&page=2&page_size=2

Timelines should contain records with unique identifiers. This is because many systems allow the creation of multiple records with the same timestamp, so a timestamp should not be used as a record identifier (primary key); instead, each record should have a separate unique identifier. Further, an ISO 8601-compliant text format for timestamps, with timezone and millisecond fields (or, where no timezone is given, consistent use of UTC), is the best bet for engineers.
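
To make this concrete, a client might page through such a timeline as follows. This is a minimal Python sketch against the hypothetical Acme endpoint above; the parameter names simply mirror the example URL:

import requests

BASE_URL = "https://www.acme.com/api/v2/tickets"  # hypothetical endpoint from the example above

def fetch_timeline(since, page_size=100):
    # Yield ticket records created at or after `since`, oldest first.
    page = 1
    while True:
        response = requests.get(BASE_URL, params={
            "offset": since,           # beginning-timestamp filter
            "sort_by": "created_at",   # chronological ordering
            "order": "asc",
            "page": page,
            "page_size": page_size,
        })
        response.raise_for_status()
        records = response.json()
        if not records:
            break                      # past the last page
        for record in records:
            yield record               # each record carries its own unique id
        page += 1

for ticket in fetch_timeline("2018-09-11T09:30:00Z"):
    print(ticket["id"], ticket["created_at"])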

2. Support for Asynchronous Interactions

High-performance APIs typically do not rely solely on a request-response interaction pattern but also support some kind of asynchronous interaction. A typical asynchronous interaction is one in which a transactional event, such as a change in a stock price, triggers an immediate update to a client. Webhooks are an example of this pattern. While not completely standardized, they are a common adjunct to REST APIs and can increase performance and ease of use. Other pub/sub technologies may also be appropriate: WebSockets, GraphQL subscriptions, and so on. The following discussion, though, concentrates on webhooks.

Webhooks involve a client passing a callback URL to an API on the "server side" (a phrase we're using somewhat loosely here). In other words, whereas the client originally called an API endpoint on some server, a webhook reverses the roles: the client becomes an API endpoint that the server calls, usually with timely updates regarding a topic the client has subscribed to (like stock ticker updates). When a relevant event occurs in the application, the application makes a call to the specified callback URL. This is much more efficient than having the API client poll for new data; network traffic is generated only when new data is actually available.

API calls for webhook management might look like this:

register_webhook(
 event: "new_incident",
 url: "https://www.acme.com/new_incident",
 secret: "2e9843098as"
)

list_webhooks(event: "new_incident")

delete_webhook(id: "2aggs77a")

A webhook is created by the registration function. The secret is passed to the callback URL as part of the payload when the webhook is invoked; this enables the client to associate the callback with a specific registration (and, commonly, to verify that the call comes from a legitimate source).

A "fat" webhook payload is better than a "skinny" one. If you are going to deliver data in the webhook, you might as well deliver everything you think the client might potentially need to avoid having the client make additional API calls to fetch missing data.

It's fairly standard practice: webhooks should rely on an HTTP 200 response as confirmation of successful delivery. The client shouldn't have to deliver anything back except a correct HTTP response code.
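
As an illustration, a minimal webhook receiver might look like the following sketch (using Flask; the /new_incident route and the secret check mirror the hypothetical registration above):

from flask import Flask, abort, request

app = Flask(__name__)
EXPECTED_SECRET = "2e9843098as"  # the secret supplied at registration time (hypothetical)

@app.route("/new_incident", methods=["POST"])
def new_incident():
    payload = request.get_json(force=True)
    # Tie the callback to a specific registration via the shared secret.
    if payload.get("secret") != EXPECTED_SECRET:
        abort(403)
    print("new incident:", payload.get("id"))  # application-specific handling goes here
    return "", 200  # the 200 response is the delivery confirmation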

Retrying notifications is another important feature. Webhook notifications can fail to be delivered, and the failure may be transient. The HTTP response code, if one is received, can indicate whether a retry should be attempted. For example, the Google Drive API will retry, with exponential backoff, if the receiving application returns 500, 502, 503, or 504 as a response code.
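
On the sending side, a retry loop along these lines is typical. This sketch treats the status codes from the Google Drive example as the retryable set; the function name and parameters are assumptions:

import time
import requests

RETRYABLE = {500, 502, 503, 504}  # transient failures worth retrying

def deliver_with_retry(url, payload, max_attempts=5):
    # POST a webhook notification, retrying transient failures with exponential backoff.
    for attempt in range(max_attempts):
        try:
            response = requests.post(url, json=payload, timeout=10)
            if response.status_code == 200:
                return True                    # delivered and confirmed
            if response.status_code not in RETRYABLE:
                return False                   # permanent failure: do not retry
        except requests.RequestException:
            pass                               # network error: treat as transient
        time.sleep(2 ** attempt)               # 1s, 2s, 4s, 8s, ...
    return False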

3. Automation-Friendly Data Models

APIs establish a contract between the target application and its clients. Fairly often, the API is there primarily to support a particular use case—a common one is driving the application from a UI—and the client and application may be developed by the same organization. In that situation, it is easy to assume implicit knowledge about the application on the part of the client that is calling it. The client developer will know what data results to expect, for example, and how to format or display them.

But ideally, APIs should be multi-purpose. In particular, they should readily support "headless" interactions in which the application is not driven by a UI, but is part of a process flow that automates a business process by connecting it to other applications. APIs should also be usable by a wide range of clients whose developers may lack special knowledge about the application.

All of this points to a need for introspection capabilities. Introspection means that it is possible to query the API, or to obtain a parallel, tightly coupled description of it, in order to discover properties of the API itself: the callable methods it exposes; attributes (metadata) of the objects it handles, including their contents and the datatypes of those contents; required and optional fields; and possible error codes and return values, among other things.

One way to support this is through a standard API description framework such as OpenAPI or OData for REST APIs. GraphQL also provides a queryable schema system. Some vendors have a non-standard but still adequate introspection API: for example, the WordPress Metadata API.
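
For instance, a client can discover an API's surface by reading its published OpenAPI description (a sketch; serving the spec at /openapi.json is a common convention, not something the Acme example defines):

import requests

# Fetch the machine-readable API description (hypothetical location).
spec = requests.get("https://www.acme.com/api/v2/openapi.json").json()

# List every operation the API exposes, with its summary if present.
for path, operations in spec.get("paths", {}).items():
    for method, details in operations.items():
        print(method.upper(), path, "-", details.get("summary", ""))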

A special problem arises when application objects can contain custom fields. These may vary between versions and/or customer-specific installations of the application. While less common, it is also possible for customized APIs to contain additional or specialized method calls.

If this is the case, then the introspection API or description format needs to be dynamic, so that it can provide the specific description applicable to the application instance and version the client is accessing. OData, for example, can do this. Hypermedia for REST is another way to provide possibly dynamic and version-specific information, especially about possible method calls.

It is important for clients to be able to recognize and adapt to objects that may have variable numbers of fields.

When custom fields are supported, their field names should be human-readable, not just IDs or acronyms. This facilitates display and also helps support AI-style capabilities like auto-mapping, in which similar fields across application objects can be used for linking or mapping.
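
As a toy illustration of why readable names matter, an auto-mapper might link fields across two applications by normalizing their labels (the field lists here are invented):

def normalize(label):
    # Reduce a human-readable field name to a comparable key.
    return "".join(ch for ch in label.lower() if ch.isalnum())

def auto_map(source_fields, target_fields):
    # Pair fields across two applications whose names match after normalization.
    targets = {normalize(f): f for f in target_fields}
    return {f: targets[normalize(f)] for f in source_fields if normalize(f) in targets}

print(auto_map(["First Name", "Company"], ["first_name", "company"]))
# {'First Name': 'first_name', 'Company': 'company'}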

4. Handling Bulk Data

As mentioned earlier, bulk or batch data retrieval and load is one of the important transactional patterns for automation. Some APIs support these patterns poorly, or only for a subset of the relevant use cases.

One important capability is bulk upsert. An upsert is an atomic "update if present, otherwise insert" operation. The first use case of bulk transfer that comes to mind is frequently an initial transfer from one application to another. But once that occurs, end users are going to want to make additional incremental transfers, too. For these, APIs need to be able to insert only new objects; in other words, to skip objects that already exist in the target application and are not replacement candidates (due to new or updated information, for example).
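
On the receiving side, bulk upsert logic might look roughly like this sketch, where the store is keyed on each record's unique identifier and modified_at timestamps decide replacement (both assumptions):

def bulk_upsert(store, records):
    # Insert new records, replace those with newer information, skip the rest.
    inserted = updated = skipped = 0
    for record in records:
        existing = store.get(record["id"])
        if existing is None:
            store[record["id"]] = record
            inserted += 1
        elif record["modified_at"] > existing["modified_at"]:
            store[record["id"]] = record   # replacement candidate: newer data
            updated += 1
        else:
            skipped += 1                   # already present, nothing new
    return {"inserted": inserted, "updated": updated, "skipped": skipped}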

Fetching very large datasets in one operation can adversely affect application performance, saturate network resources, and increase the chance of a failure that requires a retry. For these reasons, it should be possible to perform bulk fetch operations in batches. Commonly this is done via a pagination capability, acquiring objects in fixed-size "pages."

Another useful capability is "submit and wait" semantics. Processing a bulk data set on the receiving end may be a costly operation that takes some time. Rather than waiting for that operation to complete and getting a confirmation back in one API call, it may be better for the sending application to make the bulk insert request and then later receive confirmation or notification that it has completed.
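
In practice this often takes the form of a job-style API: submit the batch, receive a job ID, then poll (or get a webhook) for completion. A sketch, with invented endpoint names and response shapes:

import time
import requests

BASE = "https://www.acme.com/api/v2"  # hypothetical, as are the endpoints below

def submit_and_wait(records, poll_interval=5):
    # Submit the bulk insert as a job, then poll until the server reports completion.
    job = requests.post(f"{BASE}/bulk_insert", json={"records": records}).json()
    while True:
        status = requests.get(f"{BASE}/jobs/{job['id']}").json()
        if status["state"] in ("completed", "failed"):
            return status
        time.sleep(poll_interval)          # the batch is processed asynchronously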

In any kind of ETL process, detecting and handling errors is an important part of the overall flow. In particular, the receiving application should provide detailed, row-level information on any part of the data set that was not successfully processed, including the reason for each error and whether or not it is retryable. This enables intelligent error reporting and retry on the part of the API client.
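
A client might consume such row-level diagnostics like this; the response shape shown is invented for illustration:

status = {
    "state": "completed",
    "row_errors": [
        {"row": 17, "reason": "missing required field 'email'", "retryable": False},
        {"row": 42, "reason": "target temporarily unavailable", "retryable": True},
    ],
}

for error in status["row_errors"]:
    print(f"row {error['row']}: {error['reason']} (retryable: {error['retryable']})")

# Only rows flagged retryable are worth resubmitting.
retry_rows = [e["row"] for e in status["row_errors"] if e["retryable"]]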

Technology has moved incredibly quickly in the past several years; it's not uncommon to joke that "legacy" now means technology from only 1-2 years ago instead of 20-30. But while application delivery and deployment have evolved considerably in the last decade toward distributed architectures and continuous delivery, API engineering has not kept up.

By shifting mindsets to incorporate a multitude of consumption patterns, API engineers can better position their applications to deliver on the speed and promise of digital transformation.
