Google's ML Kit: Machine Learning For Mobile Made Easy

Mountain View - At its I/O developer conference, Google today debuted ML Kit, a new SDK with five core APIs that gives mobile developers the power to add machine learning to their apps using Firebase. If you're looking to add some smarts to your apps, this is the tool you need.

"We want Machine Learning to be a thing," said Brahim Elbouchikhi, Product Manager at Google. That's why the company decided to build its own machine learning package and tie it up in a tidy SDK. ML Kit is meant to help developers skip the heavy computation and let Google take care of the tough math. The idea is to let developers spend more time being creative.

"Even for the seasoned expert, adapting and optimizing models to run on mobile devices can be a huge undertaking. Beyond the machine learning complexities, sourcing training data can be an expensive and time consuming process, especially when considering a global audience," continued Elbouchikhi.

ML Kit is an SDK with five core APIs: text recognition, face detection, barcode scanning, image labeling, and landmark recognition. Google expects to add two more over time: a smart reply API and a high-density face contour addition to the face detection API.

Google says developers need only pass their data into ML Kit, which will kick back an "intuitive response."
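To get a sense of what "pass your data in, get a response back" looks like, here is a minimal Kotlin sketch of on-device text recognition. The class and method names reflect the launch-era Firebase ML Vision API as best we can reconstruct them, and `recognizeText` and its `Bitmap` argument are placeholders for whatever your app captures:

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Hypothetical helper: hand ML Kit a captured image, get text back.
fun recognizeText(bitmap: Bitmap) {
    // Wrap the raw image in ML Kit's container type.
    val image = FirebaseVisionImage.fromBitmap(bitmap)

    // On-device detector: works offline, no per-call cost.
    val detector = FirebaseVision.getInstance().visionTextDetector

    detector.detectInImage(image)
        .addOnSuccessListener { result ->
            // The "intuitive response": recognized text, grouped in blocks.
            for (block in result.blocks) {
                println(block.text)
            }
        }
        .addOnFailureListener { e ->
            // Model download or detection failed; handle gracefully.
        }
}
```

The asynchronous listener pattern is standard for Firebase tasks, since detection may involve downloading a model the first time it runs.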

The APIs come in two sets: on-device and cloud-based. The on-device APIs work offline, but have a more limited data set from which to draw answers. For example, Elbouchikhi said the label detection API can recognize 430 labels in offline mode and more than 10,000 when parsing the cloud. It's free for developers to add machine learning to their apps in offline mode. Accessing the cloud-based data will cost some coin, though Google didn't specify how it will charge. Either way, Google says responses are quick.
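That on-device/cloud split shows up directly in the API: you ask `FirebaseVision` for one detector or the other. A hedged Kotlin sketch, again assuming the launch-era Vision class names (`online` is a hypothetical flag your app would set based on connectivity or budget):

```kotlin
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

fun labelImage(image: FirebaseVisionImage, online: Boolean) {
    if (online) {
        // Cloud labeler: metered, needs a connection, 10,000+ labels.
        FirebaseVision.getInstance().visionCloudLabelDetector
            .detectInImage(image)
            .addOnSuccessListener { labels ->
                labels.forEach { println("${it.label}: ${it.confidence}") }
            }
    } else {
        // On-device labeler: free and offline, ~430 labels.
        FirebaseVision.getInstance().visionLabelDetector
            .detectInImage(image)
            .addOnSuccessListener { labels ->
                labels.forEach { println("${it.label}: ${it.confidence}") }
            }
    }
}
```

Because both detectors return labels with confidence scores through the same task interface, switching between the free offline tier and the paid cloud tier is a one-line change.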

It will be easy to test your models. "Experimentation is essential to machine learning development," says Elbouchikhi. "Firebase Remote Config lets you experiment with multiple custom models and dynamically switch values in your app, making it a great fit to swap the custom models you want your users to use on the fly." Because the APIs are plug-and-play, developers won't have to adjust their computations.
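In practice, that means treating the model name as a Remote Config value your server can flip at any time. A minimal sketch using the Firebase Remote Config Android API of the era; the `active_model` key and `model_v1` default are hypothetical names for illustration:

```kotlin
import com.google.firebase.remoteconfig.FirebaseRemoteConfig

// Hypothetical helper: look up which custom model this user should get.
fun fetchActiveModelName(onReady: (String) -> Unit) {
    val config = FirebaseRemoteConfig.getInstance()

    // Fallback used until a fetch succeeds ("active_model" is an assumed key).
    config.setDefaults(mapOf("active_model" to "model_v1"))

    config.fetch().addOnCompleteListener { task ->
        if (task.isSuccessful) {
            // Make the newly fetched values live.
            config.activateFetched()
        }
        // Swap models on the fly, no new APK required.
        onReady(config.getString("active_model"))
    }
}
```

Pointing different user segments at different values of that key is how you'd A/B test two custom models against each other.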

Google envisions this working well with camera apps, dieting and fitness apps, and other apps that rely on imagery to label items and conjure up other information. For example, a user will be able to take a picture of their meal and receive a more accurate assessment of the calorie count and other nutrition data.

Already have some machine learning smarts of your own? Google says developers will be able to deploy their own TensorFlow Lite models. Devs can upload them via Firebase, and Google will host and serve the models to app users.
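Wiring up a hosted custom model looks roughly like the sketch below, assuming the custom-model class names from the launch-era Firebase docs. The model name `my_model` is a placeholder for whatever you called the TensorFlow Lite file you uploaded in the Firebase console:

```kotlin
import com.google.firebase.ml.custom.FirebaseModelInterpreter
import com.google.firebase.ml.custom.FirebaseModelManager
import com.google.firebase.ml.custom.FirebaseModelOptions
import com.google.firebase.ml.custom.model.FirebaseCloudModelSource

// Register the cloud-hosted model ("my_model" is an assumed name).
val cloudSource = FirebaseCloudModelSource.Builder("my_model")
    .enableModelUpdates(true) // pick up new versions as you upload them
    .build()
FirebaseModelManager.getInstance().registerCloudModelSource(cloudSource)

// Build an interpreter that downloads and runs the hosted model.
val options = FirebaseModelOptions.Builder()
    .setCloudModelName("my_model")
    .build()
val interpreter = FirebaseModelInterpreter.getInstance(options)
```

From there, you'd feed the interpreter input arrays shaped to match your model and read predictions back out, with Firebase handling the download and update plumbing.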

Lastly, Google is working on some server-based compression techniques to keep APK sizes down. Machine learning models can add considerably to the size of a mobile app. Google's new compression technique can squash them down by a factor of 10. The company is still testing this, however, and couldn't say how much it will cost when released.

For now, Google says developers can snag ML Kit via Firebase. It's available today.
