How Predictive APIs Simplify Machine Learning

App developers are always looking for ways to make their users’ lives easier and to introduce innovative features that save users time. For this reason, Machine Learning (ML) has become increasingly popular in app development. Classical examples include spam filtering, priority filtering, smart tagging, and product recommendations. Some estimate that Machine Learning is now used in more than half of a typical smartphone’s apps. Because of the new functionality these apps gain, we can talk of “predictive apps,” a term coined by Forrester Research which refers to “apps that provide the right functionality and content at the right time, for the right person, by continuously learning about them and predicting what they’ll need.”

Predictive APIs, such as the ones provided by Amazon Machine Learning, BigML, Google Prediction API, and PredicSis, are the easiest way for developers to get started with Machine Learning and to add predictive features to their apps: all you need is data to feed to the API.


In spam filtering, for instance, the data you’d use would consist of example messages/comments/posts (the inputs of the ML problem) along with their corresponding classes (the outputs: “spam” or “ham”). This data is analyzed to create a model of the relationships between inputs and outputs, so that when a new input (a new message) is given, the model can be used to predict the output (its spamminess).

High-level APIs to get started fast

It should be noted that predictive APIs are also called Machine Learning APIs, but because the focus here is on building predictive apps rather than on the underlying Machine Learning techniques, “Predictive APIs” is the better term.

Predictive APIs abstract away the technical complexities of using ML as they offer a high degree of automation in the creation and the deployment of predictive models. Essentially, predictive APIs have two core methods that look like the following:

  • model = create_model(dataset)
  • predicted_output = create_prediction(model, new_input)
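To make these two methods concrete, here is a deliberately naive, self-contained sketch in plain Python — not any provider’s actual implementation — where the “model” is just per-class word counts learned from a handful of labeled messages, echoing the spam example above:

```python
from collections import Counter

# Toy labeled dataset: (message, class) pairs — the inputs and
# outputs of the spam-filtering problem described earlier.
training_data = [
    ("win a free prize now", "spam"),
    ("limited offer, click to win", "spam"),
    ("meeting rescheduled to friday", "ham"),
    ("lunch tomorrow?", "ham"),
]

def create_model(dataset):
    """Train phase: count how often each word appears per class."""
    counts = {"spam": Counter(), "ham": Counter()}
    for message, label in dataset:
        counts[label].update(message.lower().split())
    return counts

def create_prediction(model, new_input):
    """Predict phase: label a new message by whichever class
    its words appeared in more often during training."""
    words = new_input.lower().split()
    spam_score = sum(model["spam"][w] for w in words)
    ham_score = sum(model["ham"][w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

model = create_model(training_data)
print(create_prediction(model, "click now to win a prize"))  # spam
print(create_prediction(model, "meeting tomorrow?"))         # ham
```

A real predictive API replaces both functions with hosted, far more sophisticated versions, but the two-call contract — train once, then predict many times — is the same.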

Looking at the signatures of the two methods above, it is quite clear that the quality of a model, and therefore the quality of its predictions, depends on the quality of the data fed to create_model. Fortunately, data preparation is something developers can handle, and it doesn’t require knowledge of ML algorithms. However, a basic understanding of the general principles of ML, its possibilities and its limitations, will provide some guidance.

It’s important to keep in mind that there are two phases in ML, usually called train and predict, which correspond to the create_model and create_prediction methods mentioned above. In a previous post on ProgrammableWeb, we saw that there are different types of predictive APIs out there; more specifically, some of them only expose models that have already been trained/learned from data, such as sentiment analysis APIs. In that case, there is only a create_prediction-type method. So when you hear about a new predictive API, the first thing to figure out is whether you’ll be using someone else’s model or your own (created from your own data).
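The distinction shows up directly in client code. A prediction-only service exposes just the second call; here is a sketch with a hypothetical endpoint URL and an injectable `post` callable standing in for your HTTP client (both are illustrative assumptions, not a specific provider’s contract):

```python
def predict_sentiment(text, post):
    """Query a hypothetical pre-trained sentiment API.

    There is no create_model step here: the provider has already
    trained the model, so the client only ever makes prediction
    calls. `post` is any callable taking (url, body_dict) and
    returning the decoded JSON response, e.g. a thin wrapper
    around an HTTP library.
    """
    response = post("https://api.example.com/sentiment", {"text": text})
    return response["label"]  # e.g. "positive" or "negative"
```

Injecting the transport keeps the sketch testable, and makes the comparison visible: a train-your-own API would add a second function for the create_model call before any prediction is possible.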

Need more control?

Microsoft, with Azure ML, shares the vision of making it easier to deploy ML in real-world applications, but it takes a different approach. In Azure’s “ML Studio,” you pick the ML algorithm you want to use (which assumes you know your way around a library of algorithms and how to set their parameters) and you define the whole workflow, from reading raw data to preparing the data that will be used to create your model. This is done via a graphical canvas where data processing tasks are represented as blocks connected to one another.
Once you’ve experimented with a few options and settled on a final workflow, you can productionize your trained model with a single click and turn it into an API that serves predictions against the model. Thanks to the Azure platform, the API will scale automatically. Azure also lets you retrain your model with new data and the same workflow, via its own API.
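For illustration, a scoring request to such a web service looks roughly like the JSON assembled below. The column names, row values, and input name are placeholders; the Studio-generated service page for your own model specifies the exact endpoint, API key, and schema to use.

```python
import json

def build_azure_scoring_request(column_names, rows):
    """Assemble a request body in the general shape used by
    Azure ML Studio request-response web services: named inputs,
    each carrying column names plus one list of values per row
    to score. Names and values here are illustrative only."""
    return {
        "Inputs": {
            "input1": {
                "ColumnNames": column_names,
                "Values": rows,
            }
        },
        "GlobalParameters": {},
    }

body = build_azure_scoring_request(["user_id", "item_id"], [["u1", "i42"]])
print(json.dumps(body, indent=2))
```

The response mirrors this structure, with an output table containing the scored labels or recommendations for each input row.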

Defining this workflow can seem like a daunting task, but fortunately there are templates on the Azure marketplace to help you get started. Each template corresponds to a particular type of problem, for instance, there is a template for recommendations that you can use on e-commerce websites or more generally, to recommend items to users.

If you’re more of an open source person, software like PredictionIO and Seldon aims at simplifying the deployment of your own predictive API on your own infrastructure. PredictionIO also has a gallery of templates similar to Azure’s, which makes adoption easier. And because it’s open source, you have control over the whole workflow from raw data to API. In particular, you can customize the ML algorithm, so you (or a data scientist) can later try to improve the quality of predictions, and you can change evaluation metrics to something adapted to your domain of application.

Obviously, one notable difference between a cloud-based solution and an open source one is having to manage an infrastructure: you’ll have to install the server somewhere, along with data storage and processing, and you’ll have to deal with scaling. The good news is that Amazon Machine Images are available for PredictionIO and Seldon, and a CloudFormation template is available to help you scale PredictionIO on Amazon Web Services (AWS).

Towards predictive API standards?

The idea with predictive APIs is to make it easy for developers to add predictive features to their apps. Typically, a CTO would choose a provider and mobilize existing resources on the team to set things up with the API and write data preparation scripts. App developers would then be responsible for:

  • tracking events and collecting usage data in the app (to be used for later updates of predictive models)
  • querying predictions and integrating them in the app

Ideally, their code should stay the same whether the API is at Amazon or Google or self-hosted, and the only thing that would change would be a URI in a config file. We’re not there yet, but there’s hope with the PSI project led by James Montgomery (University of Tasmania), Mark Reid (NICTA), and Barry Drake (Canon Information Systems Research Australia), which aims at standardizing the world of Machine Learning APIs. PSI is a service architecture and specification for presenting learning algorithms and data as RESTful web resources that are accessible via a common but flexible and extensible interface.
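Under that ideal, the only provider-specific artifact is configuration. A minimal sketch of the pattern, with an invented endpoint and URL scheme used purely for illustration:

```python
import configparser

# Only this configuration changes when switching providers;
# the endpoint below is a made-up example, not a real service.
CONFIG_TEXT = """
[predictive_api]
endpoint = https://predictions.example.com/v1
"""

config = configparser.ConfigParser()
config.read_string(CONFIG_TEXT)
ENDPOINT = config["predictive_api"]["endpoint"]

def prediction_url(model_id):
    """Build a prediction URL from the configured endpoint, so
    app code stays identical whether the API is hosted by a
    cloud provider or self-hosted on your own infrastructure."""
    return f"{ENDPOINT}/models/{model_id}/predictions"

print(prediction_url("spam-filter-1"))
```

Today, request and response formats still differ between providers, which is exactly the gap a standard like PSI would close: with a common interface, swapping the endpoint really would be the whole migration.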

Be sure to read the next Predictions article: How to Leverage Machine Learning via Predictive APIs