Microsoft continued to build out its Cognitive Services portfolio by making three more of its Cognitive Services APIs generally available. The Face API, Computer Vision API, and Content Moderator API were announced as generally available (GA) at yesterday's Microsoft Data Amp online event. The three represent a fraction of the 20+ APIs that make up Microsoft's full Cognitive Services offering.
"Microsoft Cognitive Services enables developers to create the next generation of applications that can see, hear, speak, understand, and interpret needs using natural methods of communication," the Cognitive Services Team commented in a blog post announcement. "We have made adding intelligent features to your platforms easier."
Prior to the GA release, the Face API could detect faces and deliver attributes such as age, gender, facial points, and head pose. The GA release adds emotion recognition. For complete details on the GA version of the Face API, check out the documentation.
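As a rough sketch of what a call looks like, the snippet below builds a Face API detect request that asks for the emotion attribute alongside the pre-GA ones. The subscription key, region, and endpoint shape are placeholders and assumptions, not values from the announcement:

```python
import json
import urllib.parse

# Hypothetical placeholders -- substitute your own key and region.
SUBSCRIPTION_KEY = "YOUR_FACE_API_KEY"
ENDPOINT = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"

def build_detect_request(image_url):
    """Build the URL, headers, and JSON body for a Face API detect call
    requesting age, gender, head pose, and the newly added emotion
    attribute."""
    params = urllib.parse.urlencode({
        "returnFaceAttributes": "age,gender,headPose,emotion",
    })
    headers = {
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
    }
    body = json.dumps({"url": image_url})
    return f"{ENDPOINT}?{params}", headers, body

# POSTing the body to the URL (e.g. with requests.post) returns a JSON
# array of detected faces with the requested attributes.
url, headers, body = build_detect_request("https://example.com/photo.jpg")
```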
Microsoft also added new features to the Computer Vision API for the GA release. The API now includes landmark recognition, a model that recognizes 9,000 natural and man-made landmarks from around the globe, and handwriting OCR, which detects handwritten text and extracts the recognized characters into machine-readable form. The handwriting OCR feature can read notes, letters, essays, whiteboards, forms, and much more. You can test the Computer Vision API via interactive demonstrations here.
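The two new features are exposed as separate endpoints. The sketch below shows plausible request URLs for each; the region and URL shapes are assumptions based on the service's REST pattern, and the key is a placeholder. Handwriting OCR is asynchronous: the service accepts the image and the caller polls a returned operation URL for the extracted text.

```python
import urllib.parse

# Hypothetical placeholders -- substitute your own key and region.
SUBSCRIPTION_KEY = "YOUR_VISION_API_KEY"
BASE = "https://westus.api.cognitive.microsoft.com/vision/v1.0"

def landmarks_url():
    """URL for the domain-specific landmarks model, which returns the
    name of a recognized landmark in the submitted image."""
    return f"{BASE}/models/landmarks/analyze"

def handwriting_url():
    """URL for handwriting OCR. The service replies 202 Accepted with an
    Operation-Location header that the caller polls for the result."""
    params = urllib.parse.urlencode({"handwriting": "true"})
    return f"{BASE}/recognizeText?{params}"

def headers():
    """Common headers for image-URL submissions."""
    return {
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
    }
```

Both endpoints take either a JSON body with an image URL or raw image bytes, so the same header-building helper serves each call.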
The Content Moderator API delivers machine-assisted text and image moderation, augmented with human review tools. Additionally, video moderation is available in preview within Azure Media Services.
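A minimal sketch of the text-moderation side is below: it builds a request that screens raw text and asks for classification scores a human reviewer can then act on. The endpoint path, region, and key are assumptions following the service's REST pattern, not confirmed values:

```python
import urllib.parse

# Hypothetical placeholders -- substitute your own key and region.
SUBSCRIPTION_KEY = "YOUR_MODERATOR_KEY"
ENDPOINT = ("https://westus.api.cognitive.microsoft.com"
            "/contentmoderator/moderate/v1.0/ProcessText/Screen")

def build_screen_request(text, classify=True):
    """Build a text-screening request. The body is the raw text itself;
    with classify on, the response includes category scores that feed
    the human review step."""
    params = urllib.parse.urlencode({"classify": str(classify).lower()})
    headers = {
        "Content-Type": "text/plain",
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
    }
    return f"{ENDPOINT}?{params}", headers, text.encode("utf-8")
```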
Microsoft has provided quick-start guides in C#, Java, Python, and more, and, to get developers' creative juices flowing, has published a number of customer stories. To get started or to review pricing, check the overview site.