Google continues to update ML Kit, the company's flagship technology for building mobile applications that use machine learning. The company announced on Tuesday at the Google I/O developer conference that improvements in ML Kit now allow mobile devices to perform machine learning functions even when disconnected from the internet.
ML Kit is Google’s set of APIs and code libraries for implementing machine learning capabilities on mobile devices. The ML Kit SDK adds a new layer of simplicity for creating programs for mobile and IoT devices. Using ML Kit, developers can take advantage of a mobile device’s camera and computing resources to implement vision analysis and natural language processing features.
The Vision capabilities in ML Kit improve the ability of mobile devices to perform mission-critical tasks such as text recognition, barcode scanning, image labeling, landmark detection and face detection faster and with greater accuracy and reliability. The SDK’s Natural Language components give applications the capability to recognize words and phrases and translate them quickly, with accuracy rates above 90%. A new On-device Translation API gives programmers access to offline models that can translate text between 58 languages, at speeds faster than previously possible. These are the same translation models that power the offline mode of Google Translate.
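As a sketch of what on-device translation looks like in app code, the snippet below uses the standalone ML Kit Translation API for Android. Class and method names follow the published ML Kit reference; the language pair and sample string are illustrative assumptions, not part of the announcement.

```kotlin
import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

// Build a translator for a fixed language pair (English -> Spanish here).
val options = TranslatorOptions.Builder()
    .setSourceLanguage(TranslateLanguage.ENGLISH)
    .setTargetLanguage(TranslateLanguage.SPANISH)
    .build()
val translator = Translation.getClient(options)

// Download the offline model once (e.g. only over Wi-Fi); after that,
// translation runs entirely on the device with no network connection.
val conditions = DownloadConditions.Builder().requireWifi().build()
translator.downloadModelIfNeeded(conditions)
    .addOnSuccessListener {
        translator.translate("Machine learning on device")
            .addOnSuccessListener { translated -> println(translated) }
            .addOnFailureListener { e -> e.printStackTrace() }
    }
```

Because the model is cached locally after the first download, subsequent calls to `translate()` succeed even in airplane mode, which is the offline capability the announcement highlights.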
Another new feature, the Object Detection and Tracking API, makes it possible for mobile-centric applications to track objects, such as a person dancing in a live camera feed, in real time. The retailer IKEA is already using the technology to let customers point the camera at an item of interest and automatically reference it directly in the IKEA catalog.
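A minimal sketch of the Object Detection and Tracking API on Android might look like the following; `mediaImage` and `rotation` are assumed placeholders that would come from a camera pipeline such as CameraX, so this is an outline rather than a complete app.

```kotlin
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.objects.ObjectDetection
import com.google.mlkit.vision.objects.defaults.ObjectDetectorOptions

// STREAM_MODE is designed for live camera feeds: it favors low latency
// and assigns tracking IDs that stay stable from frame to frame.
val options = ObjectDetectorOptions.Builder()
    .setDetectorMode(ObjectDetectorOptions.STREAM_MODE)
    .enableTracking()
    .build()
val detector = ObjectDetection.getClient(options)

// 'mediaImage' and 'rotation' are placeholders supplied by the camera.
val image = InputImage.fromMediaImage(mediaImage, rotation)
detector.process(image)
    .addOnSuccessListener { objects ->
        for (obj in objects) {
            // The same trackingId is reported for the same physical object
            // across frames, which is what enables real-time tracking.
            println("id=${obj.trackingId} box=${obj.boundingBox}")
        }
    }
```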
Object Detection also makes it possible for mobile devices to be context-aware. For example, a user can take a photograph and then use an ML Kit-enabled app on the phone to inspect it, identifying the objects in the photo as well as the probable context in which they appear.
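The context-awareness described above pairs naturally with ML Kit's image labeling. The sketch below, assuming a `Bitmap` already captured by the user, runs the default on-device labeler and prints each label with its confidence score:

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

// 'photo' is a placeholder for a Bitmap taken by the user.
fun labelPhoto(photo: Bitmap) {
    val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)
    labeler.process(InputImage.fromBitmap(photo, 0))
        .addOnSuccessListener { labels ->
            // Each label carries a confidence score an app can use to
            // infer context (e.g. whether the scene is indoors, a beach...).
            for (label in labels) {
                println("${label.text}: ${label.confidence}")
            }
        }
}
```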
Proponents of ML Kit assert that the simplicity and power the technology brings to mobile computing will usher in a new class of intelligent mobile applications, and that the SDK will change how mobile applications are designed and programmed going forward.