This week Microsoft announced the latest updates to its Cognitive Services Speech SDK, now at version 1.3. Included in the update are input-microphone selection through the AudioConfig class, beta support for Unity, and new sample code.
New to the SDK is the ability to select the input microphone through the AudioConfig class. For developers, this means they can now stream audio data to the service without being locked into the default microphone.
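As a rough sketch of what microphone selection looks like in practice, the Python package `azure-cognitiveservices-speech` exposes an `AudioConfig` class whose `device_name` parameter targets a specific capture device. The subscription key, region, and device ID below are placeholders, not values from the announcement:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials -- a real Speech service subscription is required.
speech_config = speechsdk.SpeechConfig(
    subscription="YOUR_SUBSCRIPTION_KEY", region="YOUR_REGION")

# Select a specific capture device by its platform endpoint ID
# instead of the system default microphone. The ID here is hypothetical.
audio_config = speechsdk.audio.AudioConfig(device_name="YOUR_DEVICE_ENDPOINT_ID")

recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config, audio_config=audio_config)

# Recognize a single utterance from the selected microphone.
result = recognizer.recognize_once()
print(result.text)
```

Omitting `device_name` (or constructing `AudioConfig(use_default_microphone=True)`) keeps the previous default-microphone behavior, so existing code is unaffected.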
The SDK has also been extended with beta support for Unity. Unity is supported on Windows x86 and x64 (desktop or Universal Windows Platform applications) and on Android (ARM32/64, x86).
This update also makes more sample code available, including samples for AudioConfig.FromMicrophoneInput, Python samples for intent recognition and translation, and samples for using the Connection object in iOS. Developers can check out the sample repository for a complete list.
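To give a flavor of the Python translation scenario the new samples cover, the sketch below uses the SDK's `SpeechTranslationConfig` and `TranslationRecognizer` classes to translate a spoken English utterance into German. The credentials are placeholders, and this is an illustrative outline rather than one of the official samples:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials -- a real Speech service subscription is required.
translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription="YOUR_SUBSCRIPTION_KEY", region="YOUR_REGION",
    speech_recognition_language="en-US")

# Request German as a translation target; more targets can be added.
translation_config.add_target_language("de")

recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config)

# Recognize and translate a single utterance from the default microphone.
result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    print(result.translations["de"])
```

The official samples in the repository cover the full set of result reasons and error handling; this sketch shows only the happy path.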
Interested developers can check out the announcement for a full listing of the improvements and bug fixes.