This is the conclusion of ProgrammableWeb’s series on Understanding the Realities of API Security. It is based on the testimony offered by ProgrammableWeb’s editor-in-chief David Berlind to the ONC’s API Security and Privacy Task Force. In the previous part -- Part 9 -- Berlind answers the following question posed by the ONC: Could Third Party Certification Authorities Play a Role In API Security?
When we were asked to testify before the ONC’s API Security and Privacy Task Force, the other panelists and I were instructed to come prepared with five minutes of oral testimony but were permitted to submit additional written testimony. The majority of this series was derived from my lengthier written testimony.
There are several key points about API security to keep in mind.
API security is much easier said than done. API vulnerabilities can be divided into two primary categories:
- API vulnerabilities due to imperfect or outdated Internet, Web, and API security specifications
- API vulnerabilities due to human oversight. This includes everything from ignoring certain security best practices to poorly designed APIs that expose unintended functionality to developers
The majority of real-world API exploits that we at ProgrammableWeb have observed over the last two years fall under the “human oversight” category, but many attacks involve some of both. For example, if Twitter issues security tokens to applications and services that want to act on behalf of its users, the failure to encrypt those tokens in transit or at rest is a human oversight. However, if, after being stolen in an unencrypted state, those tokens could be used by any third party to freely impersonate the Twitter users they belonged to, that would point to a weakness in the security specifications that govern API consumption (note: this was a problem that was very recently addressed by a new IETF specification).
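The bearer-token weakness described above can be sketched in a few lines. This is a hypothetical illustration, not Twitter’s actual implementation or the exact IETF mechanism: it contrasts a plain bearer token (where possession alone grants access) with a proof-of-possession scheme, in which the server binds a signing key to each token at issuance so that a stolen token is useless without the key.

```python
import hmac
import hashlib
import secrets

# Server-side store mapping each issued token to its bound signing key.
# With plain bearer tokens there is no bound key, so anyone holding the
# token can impersonate the user it belongs to.
TOKENS = {}

def issue_token():
    """Issue a token plus a proof-of-possession key (the key is delivered
    once to the legitimate client and never sent with later requests)."""
    token = secrets.token_hex(16)
    key = secrets.token_bytes(32)
    TOKENS[token] = key
    return token, key

def sign_request(key, method, path, body=b""):
    """Client signs each request with the key bound to its token."""
    msg = method.encode() + b"\n" + path.encode() + b"\n" + body
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_request(token, method, path, body, signature):
    """Server recomputes the signature with the bound key; a thief who
    intercepted only the token cannot produce a valid signature."""
    key = TOKENS.get(token)
    if key is None:
        return False
    msg = method.encode() + b"\n" + path.encode() + b"\n" + body
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

In this sketch, stealing the token in transit no longer yields impersonation, because the per-token key never travels with the request.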
Taken together, these human and technical challenges make it possible for even the biggest Internet companies to publish improperly secured APIs. This does not mean that the risks of APIs outweigh their benefits. But it shows the degree to which the entire API community can never rest on its laurels and must work together to ensure the fastest possible sharing and adoption of the latest best practices and technologies for protecting APIs.
What follows is a copy of my five-minute testimony, which is, in essence, a summary of my written submission. I’ve chosen to use that summary as the conclusion to this series.
For more than two years, ever since an API-related attack impacted thousands of Twitter and Facebook users, I have been researching API security from a real-world perspective. Every time there is news of some major exploit (such as a major retailer getting compromised), I go through this list of questions:
- Was an API involved? If so,
- What was the final objective of the hackers?
- What role did an API play in achieving that objective?
- Did the API provider leave its guard down or did the hackers rely on a new or unaddressed vulnerability in standard API technology, or some combination thereof?
- What must be done to prevent it from happening again?
My answers to your questions are informed by these two years of research. A significantly more detailed version of this testimony has been filed with the ONC.
ProgrammableWeb does not currently offer an API. Rather, ProgrammableWeb maintains the largest independently run directory of APIs -- more than 14,500 at last count -- though there are many more we don’t know about.
ProgrammableWeb also publishes articles for API practitioners. Among them are various detailed accounts of API security exploits.
Many of the API providers we track offer publicly viewable documentation. It is considered a best practice to offer such documentation as a part of developer and partner recruiting efforts.
Even when an API provider doesn’t offer official documentation for its API(s), a third party might publish unofficial documentation. A recent example of this involved the APIs for remotely accessing a Tesla automobile.
When an API provider is looking to attract as many developers as possible, it usually does not concern itself with who can and who can’t get access to its API(s). In partner-oriented programs, the API provider usually knows exactly who has access to its APIs and for what reasons. Netflix is an example of such a program.
Developers are sometimes required to hold certain certifications to use an API. For example, PayPal’s API terms of service say that API users must comply with the Payment Card Industry Data Security Standard (PCI DSS) and the Payment Application Data Security Standard (PA DSS), and that documentation evidencing this compliance must be provided upon request.
Twitter and other providers have similar terms that prohibit circumventing rate limits, a common defense against brute force attacks.
In 2014, private photos belonging to several celebrities, including Jennifer Lawrence, were shared on the Internet after hackers allegedly penetrated a non-rate-limited Apple API with a brute force attack. The hackers even published the source code they used to perpetrate their attack.
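The kind of rate limiting that blunts such a brute-force attack can be sketched simply. This is a minimal, illustrative fixed-window limiter keyed by account (the key names and limits are placeholders, not any provider’s actual policy): once an account accumulates too many attempts inside the window, further attempts are rejected before credentials are even checked.

```python
import time
from collections import defaultdict

class RateLimiter:
    """Allow at most `limit` attempts per `window` seconds for each key
    (e.g. an account name or client IP address)."""

    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.attempts = defaultdict(list)  # key -> timestamps of recent attempts

    def allow(self, key, now=None):
        """Return True if this attempt is permitted, False if over the limit."""
        now = time.monotonic() if now is None else now
        # Keep only attempts that fall inside the current window.
        recent = [t for t in self.attempts[key] if now - t < self.window]
        self.attempts[key] = recent
        if len(recent) >= self.limit:
            return False  # reject before the password is ever evaluated
        recent.append(now)
        return True
```

Without such a guard, an attacker can iterate through a password list as fast as the API will respond, which is precisely what the published attack code exploited.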
Terms associated with PCI compliance or rate limiting are just two very small examples of such restrictive terms.
While thousands of organizations race to join the API gold rush, very few of them fully appreciate the difficulty of securing APIs. The belief, or the advice, that your API will be secure if you rely on well-known Internet, Web, and API security standards and best practices has not been borne out.
Since 2014, many of the biggest Internet companies on the planet have either fallen prey to, or discovered a major API vulnerability. This includes Google, Apple, Facebook, Pinterest, and Snapchat. If the companies with the deepest pockets to employ the best experts are experiencing challenges in securing their APIs, how can lesser-resourced organizations be expected to successfully do the same?
When mobile applications are in use -- which covers a great many API use cases -- the majority of the API secrets shared between those applications and the APIs they call are easily discoverable, even when standard security technologies like HTTPS/TLS are assumed to have secured them. Certificate pinning, mentioned earlier by Google’s Stephan Somogyi, closes this vulnerability, but inelegantly so.
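At its core, certificate pinning means the client compares the certificate the server actually presents against a digest baked into the app at build time, and refuses the connection on any mismatch -- even if the certificate otherwise chains to a trusted CA. The sketch below shows just that comparison step; the pinned digest is a placeholder, not a real certificate’s hash. (In a real Python client, the server’s DER-encoded certificate can be obtained via `ssl.SSLSocket.getpeercert(binary_form=True)`.)

```python
import hashlib
import hmac

# SHA-256 digest of the server's DER-encoded certificate, hard-coded into
# the app at build time. Placeholder bytes stand in for a real certificate.
PINNED_SHA256 = hashlib.sha256(b"example-der-certificate-bytes").hexdigest()

def certificate_matches_pin(der_cert_bytes, pinned_hex=PINNED_SHA256):
    """Return True only if the presented certificate hashes to the pinned
    value. A client would abort the TLS connection when this returns False,
    which defeats interception proxies that would otherwise let an observer
    harvest the API secrets the app transmits."""
    digest = hashlib.sha256(der_cert_bytes).hexdigest()
    return hmac.compare_digest(digest, pinned_hex)
```

The inelegance the testimony alludes to follows directly from this design: the digest is frozen into the shipped app, so every certificate rotation requires updating and redistributing the client.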
Another major issue: The most advanced solutions for running APIs -- home-grown or canned -- are sometimes out of step with the most freshly baked API security standards -- for example those from the IETF. Two key suggestions of mine are as follows:
1. The maintenance of a centrally distributed, constantly evolving checklist for not just securing APIs, but their adjacencies as well. This can inform key stakeholders on how to maintain the best possible API security taking into account the very latest exploits.
The same checklist could serve as the audit basis for some sort of “Good Housekeeping Seal of Approval.” In researching the majority of the real-world API attacks that have taken place over the last two years, I have begun to formulate such a checklist.
2. Something must be done to ensure that white-hat activity does not end in criminal prosecution but rather is encouraged through bug bounty programs.
There are a great many ways and known best practices for securing APIs -- far too many to enumerate in the allotted five minutes.
One important question: How do you instill confidence in consumers that their applications are safe to use? It is this very question that I asked myself and that provoked me to consider the idea of a Good Housekeeping seal of approval and all the elements that would make such a program successful. Those elements are too numerous and detailed to cover as part of this testimony.
Finally, I don’t think there are existing third-party certifying authorities that can be leveraged. But there are examples to learn from, like TRUSTe, NIST’s Green Button initiative, and the PCI Security Standards Council.