A few years ago Google introduced ML Kit, an SDK that lets mobile developers add machine learning to their apps. More recently, Google released a standalone version of ML Kit that lets developers build AI-assisted apps that run directly on devices, without requiring Firebase. Google has now announced two new APIs for ML Kit: Entity Extraction and Selfie Segmentation.
The Entity Extraction API can detect entities in static text as well as while a user is typing. The API supports 15 languages and 11 entity types, including addresses, dates, emails, phone numbers, and more. With this API, developers should be able to create richer in-app experiences by not only understanding text, but also by letting specific actions be performed on it.
Supported entities at launch include:
- Address (350 third street, cambridge)
- Date-Time* (12/12/2020, tomorrow at 3pm, let’s meet tomorrow at 6pm)
- Email (firstname.lastname@example.org)
- Flight Number* (LX37)
- IBAN* (CH52 0483 0000 0000 0000 9)
- ISBN* (978-1101904190)
- Money (including currency)* ($12, 25USD)
- Payment Card* (4111 1111 1111 1111)
- Phone Number ((555) 225-3556, 12345)
- Tracking Number* (1Z204E380338943508)
- URL (www.google.com, https://en.wikipedia.org/wiki/Platypus, seznam.cz)
The API includes support for both iOS and Android and is currently in beta.
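On Android, entity extraction follows the usual ML Kit pattern of creating a client, downloading the on-device model, and then annotating text. The sketch below is a minimal, hedged example based on the beta Android client; the input string and log tag are illustrative, and the exact artifact names may change while the API is in beta.

```kotlin
import android.util.Log
import com.google.mlkit.nl.entityextraction.EntityExtraction
import com.google.mlkit.nl.entityextraction.EntityExtractionParams
import com.google.mlkit.nl.entityextraction.EntityExtractorOptions

// Create an extractor for English; the model is downloaded on demand.
val entityExtractor = EntityExtraction.getClient(
    EntityExtractorOptions.Builder(EntityExtractorOptions.ENGLISH).build()
)

entityExtractor.downloadModelIfNeeded()
    .onSuccessTask {
        // Sample input text (hypothetical) containing an address and a date-time.
        val params = EntityExtractionParams
            .Builder("Meet me at 350 third street, cambridge tomorrow at 6pm")
            .build()
        entityExtractor.annotate(params)
    }
    .addOnSuccessListener { annotations ->
        // Each annotation covers a span of the input and may contain one or
        // more detected entities (address, date-time, etc.).
        for (annotation in annotations) {
            for (entity in annotation.entities) {
                Log.d("EntityExtraction", "${annotation.annotatedText} -> ${entity.type}")
            }
        }
    }
```

Because the model download and annotation both run asynchronously, the calls return `Task` objects and results are delivered via listeners rather than blocking the UI thread.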
The new Selfie Segmentation API lets developers separate a user from the background of an image, so they can apply effects to the foreground or place the user against a different background.
With Selfie Segmentation, users can separate images from the background and add effects (credit: Google)
Some of the key capabilities of the API include:
- Lets developers replace or blur a user’s background
- Works with single or multiple people
- Cross-platform support (iOS and Android)
- Runs in real time on most modern phones
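On Android, the segmenter returns a per-pixel confidence mask rather than a finished image, leaving the compositing to the app. The sketch below is a minimal example assuming the beta Android client; `bitmap` stands in for an image the app already holds, and the loop only shows how the mask buffer is read.

```kotlin
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.segmentation.Segmentation
import com.google.mlkit.vision.segmentation.SegmentationMask
import com.google.mlkit.vision.segmentation.selfie.SelfieSegmenterOptions

// SINGLE_IMAGE_MODE suits still photos; STREAM_MODE suits live camera feeds.
val segmenter = Segmentation.getClient(
    SelfieSegmenterOptions.Builder()
        .setDetectorMode(SelfieSegmenterOptions.SINGLE_IMAGE_MODE)
        .build()
)

// `bitmap` is a hypothetical android.graphics.Bitmap provided by the app.
val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)

segmenter.process(image)
    .addOnSuccessListener { mask: SegmentationMask ->
        // The mask buffer holds one float per pixel: the confidence that the
        // pixel belongs to the person in the foreground. High-confidence
        // pixels can be kept while the rest are blurred or replaced.
        val buffer = mask.buffer
        for (y in 0 until mask.height) {
            for (x in 0 until mask.width) {
                val foregroundConfidence = buffer.float
                // Composite the pixel against the new background here.
            }
        }
    }
```

Exposing raw confidence values is what makes both background replacement and background blur possible from the same API call: the app decides how to blend each pixel.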
Interested developers can join the early access program and request access to the API by filling out this form.