API Businesses Don't Deserve to Exist Unless They Aggregate

Growing up, I studied piano. By high school, I was bored with reciting Scott Joplin, and the Casio synthesizer I had access to didn’t pull my heartstrings. But then it happened... I remember the first time I played the organ in a church. After configuring the interface (known as stops on an organ), I pressed three keys. The keyboard encoded my press into an electrical signal, sent it to the other end of the cathedral to actuators that controlled airflow to these giant pipes, and WOW.
 
Little me could make a BIG sound.
 
To me, this is the fundamental notion of an awesome API: to control vast resources with simple programmatic commands:

  1. Look at 1,000 images and select the ones with babies in them? There’s an API for that.
  2. Simulate 1,000 stock market scenarios and tell me if my trade has a chance of letting me retire? There’s an API for that.
  3. Find out everything about my high school teacher, including their credit score, given only their email address? There’s an API for that.
  4. Send money through five banks to someone’s account on the other side of the world? There’s an API for that.
  5. Beam propaganda to all the friends of my opponents? Sadly, there’s an API for that too.
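
Here’s roughly what the first of those switches looks like from the developer’s seat - a minimal sketch against a hypothetical image-classification service (the endpoint, parameters and response shape below are invented for illustration):

```typescript
// Hypothetical image-classification API - the endpoint, parameters and
// response shape are invented for illustration, not a real service.
async function findBabyPhotos(imageUrls: string[]): Promise<string[]> {
  const response = await fetch("https://api.example.com/v1/classify", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer <API_KEY>",
    },
    body: JSON.stringify({ images: imageUrls, label: "baby" }),
  });
  const results: { url: string; matches: boolean }[] = await response.json();
  // A dozen lines of code on this end; racks of GPUs on the other.
  return results.filter((r) => r.matches).map((r) => r.url);
}
```

Small press, big sound.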

As developers have become the architects of the human experience and their craft has been exposed to the masses, the opportunity to offer those developers switches that operate vast resources, in the form of APIs, has increased. Everybody gets an organ keyboard, and the other end is connected to elaborate and vast pipes throughout the world.
 
Before joining Shasta, I was the CTO of API Economy at IBM, and this was our vision: to empower businesses to expose their functions as APIs that others can mix, match and build upon - setting up all kinds of pipes in the Worldwide Wurlitzer. Before that, at StrongLoop, we maintained very popular frameworks that developers use to create these APIs, including Node.js, Express and LoopBack.

Here’s what I learned: Startup API businesses don’t deserve to exist unless they aggregate. Think about this like a developer: what is the point of adding an abstraction layer? Either it needs to hide a lot of complexity (as the famous Stripe and Twilio APIs do for banking and telecom complexity) or it needs to aggregate. Startups typically don’t have any complex infrastructure of their own to abstract - that’s what big-company APIs can do. Startups can gain an advantage by aggregating many functions, or the same function from many sources - or by aggregating datasets or uses of data.

But wait, you say, Twilio and Stripe were once startups! Yes, and I’d say what they did was aggregate access to infrastructure: Twilio made it easy to send SMS across many mobile networks, but it doesn’t operate the networks; Stripe made it easy to charge credit cards through many banks, but it doesn’t operate a bank. Because many entrepreneurs have figured this out, the opportunities to aggregate across the big players are highly contested. But there is a new area in which to gain an advantage: data, and especially data combined with machine learning.
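
Before getting to data, it helps to make “aggregate the same function from many sources” concrete. Here’s a minimal sketch of the Twilio-style shape, with invented provider adapters (the SmsProvider interface, coversNumber and the routing logic are assumptions for illustration, not anyone’s actual design):

```typescript
// Sketch of an aggregation API: one simple command in front, many messy
// provider integrations behind it. Everything here is illustrative.
interface SmsProvider {
  name: string;
  coversNumber(phone: string): boolean;
  send(phone: string, message: string): Promise<void>;
}

// In practice: one adapter per carrier or gateway, each with its own quirks.
const providers: SmsProvider[] = [];

export async function sendSms(phone: string, message: string): Promise<void> {
  for (const provider of providers.filter((p) => p.coversNumber(phone))) {
    try {
      await provider.send(phone, message);
      return; // delivered - the caller never learns which provider was used
    } catch {
      // fall through and try the next provider that covers this number
    }
  }
  throw new Error(`No provider could deliver to ${phone}`);
}
```

The code is trivial; the value is in the breadth of what sits behind it - which is exactly why these spaces are so contested today.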

Training and running ML models is expensive. It takes thousands to millions of hours of GPU time to train a great model, shuffling data to and fro all the while, and huge amounts of annotation to create the training set. If the model can be used across companies, or better yet, across industries, it makes sense to do this once, optimize the runtime, and offer the model (behind an API) for far less than it would cost anyone to create it themselves, spreading the cost among customers. Indeed, this aggregation of expense and distribution of cost is the fundamental idea of a software vendor: build a system that’s too expensive for any one user to have created alone, and distribute the cost among the many people who need the same functionality.
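
To see how that cost sharing plays out, here’s a back-of-the-envelope sketch - every number in it is hypothetical:

```typescript
// Back-of-the-envelope amortization. All figures are hypothetical and only
// illustrate why a shared, API-fronted model beats building one per company.
const trainingCost = 5_000_000;  // one-time GPU time + annotation, in dollars
const monthlyServing = 50_000;   // ongoing runtime cost per month
const customers = 200;           // companies subscribing to the API
const amortizationMonths = 24;   // window over which training cost is recovered

const perCustomerPerMonth =
  (trainingCost / amortizationMonths + monthlyServing) / customers;

console.log(`$${perCustomerPerMonth.toFixed(0)} per customer per month`);
// => "$1292 per customer per month" - versus each customer bearing the full
// $5M (plus serving costs) on its own.
```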

Easy ML APIs have been created already - you can process text, classify objects in photographs, and translate from one language to another. It gets really interesting when a model trained on private data can be aggregated by a vendor and distributed to everyone who may benefit from it. Imagine a model that can predict how long a bug in Python code will take to fix - it would need to be trained on all the bug reports and all the resulting code changes we can get our hands on. This model will get better the more code and bug reports it sees, but nobody wants to make their code and bug reports public (outside of open source projects, of course). A trusted vendor can be given access to this data, train a model, and allow that model to be used by customers. That’s the type of API future we’re going to see, and it will be awesome.
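
If that bug-fix model existed behind an API, consuming it might look something like this - a sketch only, with an invented endpoint and response (no such service is implied to exist):

```typescript
// Hypothetical "time to fix" prediction API. The endpoint, request and
// response fields are invented for illustration.
interface BugReport {
  title: string;
  description: string;
  affectedFiles: string[];
}

async function predictFixHours(bug: BugReport): Promise<number> {
  const response = await fetch("https://api.example.com/v1/predict-fix-time", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer <API_KEY>",
    },
    body: JSON.stringify(bug),
  });
  // The model behind this endpoint would be trained on pooled, private bug
  // histories that no single customer could have assembled alone.
  const { estimatedHours } = await response.json();
  return estimatedHours;
}
```

Simple on the outside, an aggregation no single customer could have assembled on the inside - the organ keyboard all over again.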
