Amazon has announced an Alexa Voice Service (AVS) API update that allows developers to build voice-activated products. After integration, products respond to the Alexa wake word and support hands-free speech recognition and cloud endpointing (i.e., cloud-based automatic detection of the end of user speech). The new features have been added to the existing API, so no upgrade is needed.
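To make the endpointing idea concrete, here is a minimal, hedged sketch of what end-of-speech detection looks like in principle: track frame energy and declare the utterance finished after a run of quiet frames. This is an illustration only, not Amazon's implementation — AVS performs this detection in the cloud, and the threshold and frame-count parameters below are made-up values.

```python
import math

def rms(frame):
    """Root-mean-square energy of one audio frame (a list of samples)."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def detect_end_of_speech(frames, silence_threshold=0.01, trailing_silent_frames=5):
    """Return the index of the frame where speech is judged to have ended:
    the start of the first run of `trailing_silent_frames` consecutive
    low-energy frames that follows at least one speech frame.
    Returns None if speech never ends within the given frames.
    (Illustrative parameters; a real endpointer is far more sophisticated.)"""
    heard_speech = False
    silent_run = 0
    for i, frame in enumerate(frames):
        if rms(frame) >= silence_threshold:
            heard_speech = True
            silent_run = 0
        elif heard_speech:
            silent_run += 1
            if silent_run >= trailing_silent_frames:
                return i - trailing_silent_frames + 1
    return None
```

Doing this server-side, as AVS does, spares the device from tuning such heuristics locally.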
To help developers start building with AVS, Amazon has released a Raspberry Pi prototyping project. Amazon partnered with Sensory and KITT.AI on the project to leverage their third-party wake word engines. The project's simplicity lets developers get their feet wet and create a wake word-enabled Alexa prototype in under an hour. To learn more about the project, visit the Alexa GitHub site.
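The basic pattern a wake word engine enables can be sketched as a gate: audio stays on the device until the detector fires, then flows to the cloud service. The sketch below is a hypothetical illustration of that pattern, not code from the prototyping project — the `detector` callable stands in for a real engine such as Sensory's or KITT.AI's, and the pre-roll buffering is an assumed design detail.

```python
import collections

class WakeWordGate:
    """Buffers recent audio and releases it downstream (e.g. to a cloud
    speech service) only once a wake word detector fires. `detector` is
    any callable frame -> bool; in a real build this role is played by a
    third-party wake word engine."""

    def __init__(self, detector, preroll_frames=3):
        self.detector = detector
        # Keep a little audio from just before the wake word fired so the
        # service hears the start of the utterance.
        self.preroll = collections.deque(maxlen=preroll_frames)
        self.streaming = False

    def feed(self, frame):
        """Feed one audio frame; return the frames to stream upstream
        (an empty list while still waiting for the wake word)."""
        if self.streaming:
            return [frame]
        self.preroll.append(frame)
        if self.detector(frame):
            self.streaming = True
            out = list(self.preroll)
            self.preroll.clear()
            return out
        return []
```

Keeping detection on-device and recognition in the cloud is what lets a low-cost board like the Raspberry Pi serve as a full prototype.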
Amazon now has a robust collection of developer resources and Alexa devices (e.g. AVS, the Alexa Skills Kit, Amazon Echo, and Amazon Tap). Additionally, Amazon has started building its partner community with third-party Alexa-enabled devices such as Triby, CoWatch, Pebble Core, and Nucleus. To better understand how to add AVS to your product, visit Amazon's Designing for AVS site. The site includes typical application examples, automatic speech recognition profiles, hardware and audio algorithms, third-party resources, and more.
Amazon encourages developers to share Alexa projects on Twitter with the hashtag #avsDevs. Amazon will highlight favorite projects and publish interviews with their developers. Those interested can keep up with projects and interviews on the Alexa Blog, and learn more about AVS in general at the Alexa site.