Google has announced updates to the Cloud Speech API which include the addition of word-level timestamps, support for long-form audio files up to three hours long, and support for 30 additional languages.
The Google Cloud Speech API has been updated to feature word-level timestamps, meaning that timestamp information is now available for each word in the transcript. Timestamps make it possible to map the audio to the text by time, so that users can jump to the point in the audio when the text was spoken. They can also be used to display the corresponding text in sync with audio playback.
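Word timings like these can drive both search-to-seek and synchronized transcript highlighting. The following is a minimal sketch, assuming the per-word timestamps returned by the API have already been extracted into (word, start, end) tuples; the sample words and times below are invented for illustration.

```python
from bisect import bisect_right

# Illustrative word-timing records mirroring the per-word output of the
# Cloud Speech API when word-level timestamps are enabled. The words and
# times below are hypothetical sample data, not real API output.
WORDS = [
    # (word, start_seconds, end_seconds)
    ("now", 0.0, 0.3),
    ("with", 0.3, 0.5),
    ("timestamps", 0.5, 1.2),
    ("we", 1.2, 1.4),
    ("can", 1.4, 1.6),
]

def seek_offset(words, target):
    """Return the start time of the first occurrence of `target`,
    letting a player jump to the moment the word was spoken."""
    for word, start, _end in words:
        if word == target:
            return start
    return None

def word_at(words, t):
    """Return the word being spoken at playback time `t`, so a UI can
    highlight the transcript as the audio plays."""
    starts = [start for _word, start, _end in words]
    i = bisect_right(starts, t) - 1  # last word starting at or before t
    if i >= 0 and t < words[i][2]:   # still inside that word's span?
        return words[i][0]
    return None
```

With the sample data above, `seek_offset(WORDS, "timestamps")` returns 0.5, and `word_at(WORDS, 1.3)` returns "we".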
"Now with Google Cloud Speech API timestamps, we can accurately analyze phone call conversations between two individuals with real-time speech-to-text transcription, helping our customers drive business impact," said VoxImplant CEO Alexey Aylarov, in a prepared statement for Google. "The ability to easily find the place in a call when something was said using timestamps makes Cloud Speech API much more useful and will save our customers' time."
The API has also been updated to support audio files up to three hours long, and support for files longer than three hours is available on a case-by-case basis. In addition, the API now supports 30 additional languages, including Bengali, Latvian, and Swahili.
The Google Cloud Speech API update is just the latest in a string of new APIs and API updates from Google in 2017. In March, the company announced the Cloud Video Intelligence API, a machine learning-powered API that can annotate Google-stored video content at the video, shot, and frame level. In April, Google announced the general availability of its neural machine translation system, which is available via the Google Cloud Translation API. In June, the company released the TensorFlow Object Detection API, which provides access to an object detection model building framework built on top of Google's TensorFlow.
For more information about Google Cloud APIs, visit the official Google Cloud Platform Site.