Elastic Beam Launches Machine Learning-Based Solution For Securing APIs

To date, API security has been one of the most vexing issues for API providers. ProgrammableWeb has chronicled the details of many breaches, most of which could have been avoided had it not been for a human oversight of one sort or another (or in some cases, just plain old poor judgement). To put some perspective on the issue (and how difficult API security is), pretty much every major Internet company has had an API security problem. Some have resulted in alarming breaches. Others appear to have been caught before any damage was done. But the moral of the story should not be lost: If companies like Google, Facebook, Pinterest, and many others that can afford the best talent still wrestle with API security, then there isn’t much hope for the rest of us.

The problem? There’s no easy button.

API product managers would love to have a checkbox on an API dashboard somewhere that, when clicked, would activate some form of infallible Fort Knox-like API security. Yes, API management solutions are constantly improving with respect to the API security that’s baked in. But, as is often said in the digital security world, it’s a jungle out there. Given the evolving range of threats, it may not be until an equally autonomic API security solution hits the scene that those entrusted with API security can get any sleep at night.

Enter Elastic Beam.

After three years, the API security company is emerging from stealth mode with a machine learning-based technology that co-founder and CEO Bernard Harguindeguy claims to be game changing in its approach to endpoint protection. 

In his conversation with ProgrammableWeb, Harguindeguy claims that the company’s solutions — which include API Security Enforcer — are unlike other security solutions in the way they treat patterns. Traditional security solutions like antivirus software quietly run in the background, waiting to pounce on some recognized pattern of intrusion. The artificial intelligence inside Elastic Beam’s technology does the opposite. It learns what legitimate traffic looks like and constantly watches for a non-pattern, pouncing only when the unexpected happens.

The approach is particularly well-suited to securing API endpoints because of a feature of APIs called the API contract (to better understand the idea of an API contract through real-world metaphors like the electrical socket in the wall, Lego, or intermodal shipping containers, be sure to watch Part 2 of ProgrammableWeb’s API 101 Video Series). Like electrical sockets, Legos, and intermodal shipping containers, APIs involve a specific technical agreement between the consumer and provider sides. If there’s any deviation from that contract in a developer’s code or in what the provider offers at the API endpoint, relevant API requests will likely fail.

A strict technical understanding like an API contract essentially means that almost all legitimate traffic is going to conform to a specific pattern and any traffic that doesn’t is, well, suspicious. 
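To make that idea concrete, here is a minimal sketch of what checking a request against an API contract might look like. The schema, endpoint, and parameter names here are hypothetical illustrations, not Elastic Beam’s implementation or any real API:

```python
# A toy "API contract": the endpoint accepts a GET with exactly
# these query parameters, each convertible to a known type.
CONTRACT = {
    "method": "GET",
    "path": "/videos",
    "params": {"user_id": int, "limit": int},
}

def conforms_to_contract(method, path, params):
    """Return True only if the request matches the contract exactly."""
    if method != CONTRACT["method"] or path != CONTRACT["path"]:
        return False
    if set(params) != set(CONTRACT["params"]):
        return False  # missing or unexpected parameters are suspicious
    for name, value in params.items():
        try:
            CONTRACT["params"][name](value)  # query values arrive as text
        except (TypeError, ValueError):
            return False
    return True

# A well-formed call conforms; a call with an unexpected parameter does not.
print(conforms_to_contract("GET", "/videos", {"user_id": "42", "limit": "10"}))  # True
print(conforms_to_contract("GET", "/videos", {"user_id": "42", "admin": "1"}))   # False
```

Anything that fails a check like this falls into the “suspicious” bucket described above, even before any machine learning is involved.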

Theoretically, you could argue that if an API request is “out of contract,” then the API simply wouldn’t respond. Or, it might respond with an error code and some dashboard could light up with an indication of a potential intrusion. But that “theory” harbors many nuances that make intrusion detection more of a black art. For example, clever developers often find ways to trigger legitimate API responses from requests that the API designer may not have originally intended. 

Back in April 2015, Google paid a bug bounty to a researcher who discovered a flaw in YouTube’s API that would have allowed an attacker to delete all of the videos on the service. While such an API call might have technically conformed to the API contract for YouTube, the structure of the call was more than likely very atypical of the API calls envisioned by the API’s designer. 

Accurately dealing with such nuances is where the machine learning and artificial intelligence aspects of Elastic Beam’s technology come into play. For the technology to work correctly, it has to learn to recognize legitimate traffic. Once it knows what all the legitimate traffic is supposed to look like, it has a better idea of what could be illegitimate. According to Harguindeguy, the training process would typically happen while an API is being developed. As the API is being tested, Elastic Beam’s technology is essentially studying all of the inbound and outbound test traffic, building a keen understanding of what belongs, what’s an accident (i.e., just a badly formed call by a developer), and what’s a threat. Then, as the API is put into production, so too is Elastic Beam and its machine-learned understanding.

One important aspect of Elastic Beam is that, in its current form, it is technically an on-premises solution. In other words, so long as you have some control over the network and the addressability of your API’s endpoint, you can also put Elastic Beam’s solution in position to watch over your API traffic. Such an arrangement is possible when you have your own datacenter or when you’re using a cloud service like Amazon for its IaaS capabilities (where you have enough “physical” control over your servers as though they were your own).  

The technology can be configured in one of two ways. In the in-band configuration, it acts like a proxy: API traffic must pass through it for inspection before being passed on to the actual API endpoint. Harguindeguy claims that the technology does not introduce any meaningful latency to API traffic in this configuration. In the out-of-band configuration, Elastic Beam interrogates the existing API gateway for a copy of inbound and outbound API traffic. This second configuration depends on Elastic Beam’s ability to integrate with today’s existing API gateways. No such arrangements exist today but, according to Harguindeguy, two integrations are on the verge of being announced. Unfortunately, at the time of the interview, he was not at liberty to disclose which ones.
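The difference between the two configurations comes down to where the inspector sits in the request path, and it can be sketched as two call patterns. The function names and the blocking behavior below are structural assumptions for illustration, not Elastic Beam’s API:

```python
def inspect(request):
    """Stand-in for the security engine's verdict on one request (hypothetical rule)."""
    return request.get("suspicious", False)

def in_band(request, forward_to_api):
    """In-band: traffic passes through the inspector on its way to the API,
    so a suspicious request can be blocked before it ever arrives."""
    if inspect(request):
        return {"status": 403, "body": "blocked"}
    return forward_to_api(request)

def out_of_band(request, forward_to_api, alert):
    """Out-of-band: the gateway hands the inspector a copy of the traffic;
    the request proceeds either way, and suspicious traffic raises an alert."""
    response = forward_to_api(request)
    if inspect(request):
        alert(request)
    return response

api = lambda req: {"status": 200, "body": "ok"}
alerts = []
print(in_band({"suspicious": True}, api)["status"])        # 403: blocked at the proxy
print(out_of_band({"suspicious": True}, api, alerts.append)["status"])  # 200: passed, but
print(len(alerts))                                         # 1: an alert was raised
```

The trade-off this sketch highlights: in-band can stop an attack outright but puts the inspector on the critical path (hence the latency question), while out-of-band adds no latency but can only detect, not block.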

Finally, once Elastic Beam is in place, it keeps learning, much the same way that other machine learning systems never stop. For example, the machine learning behind Google Photos keeps getting better at identifying an image as a human face every time another image of a human face is added to the service. Similarly, the more legitimate traffic that comes through Elastic Beam's API security solutions, the more it learns. Although it's not a feature of the product just yet, Harguindeguy says that customers will eventually be able to opt in to a service whereby the machine learnings can be shared across installations, thereby making all customers' installations even smarter about what's legit, and what's not.

David Berlind is the editor-in-chief of ProgrammableWeb.com. You can reach him at david.berlind@programmableweb.com. Connect to David on Twitter at @dberlind or on LinkedIn, put him in a Google+ circle, or friend him on Facebook.
 
