How to Turn Your Predictive Models into APIs Using Domino

Training a Predictive Model (Basic)

The classic Iris dataset is a typical test case for many classification techniques in machine learning. For example, we can use an algorithm called random forest to train a classifier in R:

## Loading the random forest package in R
library(randomForest)

## Training a random forest classifier in R
## Note: the iris dataset is available in R by default. 
## To find out more, try `head(iris)` and `summary(iris)`
model <- randomForest(Species ~., data = iris)

## Save the model for the API Endpoint
save(model, file = "iris_rf_model.rda")

The code above is the bare minimum for training and saving a random forest model for Iris in R. Once the script has been uploaded to your Domino project, you can start the run from Domino’s web interface. Simply open your Domino project, browse to the Files section, and then click the Run button next to the R script.
[Screenshot: starting the training run from the Files section in Domino]
After the run, the Files section will show a new iris_rf_model.rda file: this file contains the model we have trained. In the next step, we will expose this model as an API.
[Screenshot: the trained model file iris_rf_model.rda in the Files section]
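
Before moving on, you can optionally sanity-check the saved file from a fresh R session or run. This is a minimal sketch; load() restores the object under the name it was saved with, model:

## Optional sanity check: reload the saved model in a fresh R session
load("iris_rf_model.rda")  ## restores the object named `model`
print(model)               ## prints the forest size, OOB error estimate and confusion matrix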

Deploying the Model as a REST API

In order to deploy the model as an API Endpoint, we need a function that uses the model to make predictions. The function should:

  1. Take four numeric features as inputs.
  2. Load the random forest model iris_rf_model.rda.
  3. Make a prediction based on the four numeric inputs.
  4. Return the prediction.

For this demo, I wrote a function predict_iris (see below) and included it in an R script called use_model_as_API.R. (Although this example uses R, publishing a Python function works the same way if you prefer Python.)

## Load Library
library(caret)
library(randomForest)
 
## Load Pre-trained model
load(file = "iris_rf_model.rda")
 
## Create a function to take four numeric inputs
## and then return a prediction (setosa, versicolor or virginica)
predict_iris <- function(sepal_length, sepal_width, 
                         petal_length, petal_width) {
 
  ## Wrap the inputs in a one-row data frame with the same
  ## column names used when the model was trained
  new_obs <- data.frame(Sepal.Length = sepal_length,
                        Sepal.Width  = sepal_width,
                        Petal.Length = petal_length,
                        Petal.Width  = petal_width)
 
  ## Use the model for prediction
  y_pred <- predict(model, newdata = new_obs)
 
  ## Return the predicted class
  return(as.character(y_pred))
 
}
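
If you want to check the function locally before publishing, you can call it from an R session in the same project. A minimal sketch, using illustrative measurements of a typical setosa flower:

## Quick local test of the function (illustrative input values)
predict_iris(5.1, 3.5, 1.4, 0.2)
## Should return "setosa"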

Once you have this R script in the Domino project, you can link the API Endpoint to the function.

  1. First, go to the API Endpoints section.
  2. Enter the name of the script (use_model_as_API.R).
  3. Point it to the specific function (predict_iris).
  4. Click Publish to activate the API service.

That’s it! That is all you need to deploy your predictive model as a web service. Within a few minutes, your Iris model is live and accessible from the internet.

Making an API Call

OK, now we have the REST API. How do we use it?

At the bottom of the API Endpoints page, you can find code templates for languages such as Bash, Python, Ruby, PHP, JavaScript, and Java (other languages can follow a similar pattern).

Let us go through a Python example here. You will need to replace the API Endpoint URL and the Domino API key with your own.

import unirest
import json
import yaml

## Four numeric inputs
X1 = 5.6; X2 = 3.2; X3 = 1.7; X4 = 0.8;

response = unirest.post("YOUR_API_Endpoint_URL",
    headers={
        "X-Domino-Api-Key": "YOUR_API_KEY",
        "Content-Type": "application/json"
    },
    params=json.dumps({
        "parameters": [ X1, X2, X3, X4 ]
    })
)

## Extract information from the response
## (the body is JSON; yaml.load can parse it, since JSON is a subset of YAML)
response_data = yaml.load(response.raw_body)

## Print Result only
print "Predicted:"
print response_data['result']

If successful, running this Python script should give you an output like this:

Predicted:
['setosa']
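
If you prefer to stay in R, you can make the same call with the httr package. This is a minimal sketch rather than one of Domino’s generated templates; it uses the same placeholder endpoint URL and API key, and assumes the response carries the prediction in a result field, as in the Python example above:

## Calling the API Endpoint from R with httr (minimal sketch)
library(httr)

url     <- "YOUR_API_Endpoint_URL"
api_key <- "YOUR_API_KEY"

response <- POST(url,
                 add_headers("X-Domino-Api-Key" = api_key),
                 body = list(parameters = list(5.6, 3.2, 1.7, 0.8)),
                 encode = "json")

## Parse the JSON response and print the predicted class
result <- content(response, as = "parsed")
print(result$result)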

Ready to Try?

When you are ready to give this a try, you can do it in five steps:

  1. Sign up for a free account at Domino Data Lab.
  2. Go to my project folder for this demo.
  3. Click the Fork this project button on the control panel (and again when you see the pop-up window).
  4. Publish your own API endpoint as described above (see the section “Deploying the Model as a REST API”).
  5. Sit back. Relax. Your API will be ready in a few minutes.

Domino provides some other nice features designed for deploying predictive models as APIs. For example, it automatically keeps a revisioned snapshot of your code and data each time you update your published models, in case you need to roll back. It also lets you schedule recurring “training tasks” that can re-train your models on large datasets and automatically deploy your updated models to your API.
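
A scheduled training task can be as simple as re-running the training script from the first section, so each run refreshes the saved model file that the endpoint uses. A minimal sketch (the script name is just an example, and in practice you would point it at your own, updated dataset rather than the built-in iris data):

## retrain_model.R -- a minimal script that a scheduled training task could execute
library(randomForest)

model <- randomForest(Species ~ ., data = iris)  ## re-train on the (updated) data
save(model, file = "iris_rf_model.rda")          ## overwrite the model file used by the API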

Conclusions

For me, the key benefit of using a third-party MaaS is that I do not have to worry about technical issues like server uptime, or marshaling JSON into or out of my code. I can focus my time and energy on developing better predictive models.

I believe this functionality can enable data scientists and software engineers to better collaborate, by easily integrating sophisticated models to create richer, more intelligent software systems.
 

Jo-fai (Joe) Chow, Data Scientist at @VirginMedia and @DominoDataLab. Hydroinformatics EngD @STREAMstreamer. Wannabe Photographer. Blogger http://bit.ly/blenditbayes
