How to Get Started With Google Actions

Voice-enabled applications are widely expected to see a lot of action this year, with Amazon’s Echo and Google’s Home devices likely to gain more user acceptance. As a developer, it is important to tap into this ecosystem early to help create great voice experiences as the field matures.

In this article, we are going to take a look at Google Actions, the platform for creating actions on the Google Assistant. The Assistant is currently available on Google Home, Google’s competitor to Amazon’s Echo, as well as in Android applications such as Google Allo and the Assistant on Google Pixel phones. Google plans for the Assistant to work across multiple other devices and apps in the future.

Writing Google Actions for Google Home is similar to writing Alexa Skills for the Amazon Echo. We earlier published an in-depth tutorial on getting started with Amazon’s Alexa Skills Kit.

Google Actions

Google opened up the Google Assistant platform to developers in December, and the platform currently supports building Conversation Actions for the Google Home device. The same Actions are widely expected to eventually be available across Google’s other devices and applications.

Image Credit: https://2.bp.blogspot.com/-VS-d7KNyxu4/WEmHkpH8WEI/AAAAAAAACUk/Yocw0Nkq-tsl3NPusNgeZMXtTcNNEQn0ACLcB/s640/numberg_b3_6-05.png

A Conversation Action is straightforward to understand: a user talks to your Action, and your Action services that request. The response can be provided by an application that you develop and host yourself, in any programming language and on any platform. Here is a diagram of the request/response interaction with the Conversation API:

Image Credit: https://developers.google.com/actions/images/conversation-api.png

To help you build Conversation Actions, Google has provided extensive documentation along with design guides on how to create a great conversational experience. There are two ways to build Conversation Actions: API.AI and the Actions SDK.

In this article, we are going to take a look at API.AI, a powerful platform for creating conversational user experiences. API.AI is a good base for your Actions because of its integrations with services like Google Home, Facebook, Slack and others; your investment in the platform can therefore be reused to deploy your Actions across the multiple platforms available today.

API.AI Platform

Google acquired API.AI in September last year, and it is positioned as one of the ways to build conversational experiences. The goal of the API.AI platform is to enable anyone to build these experiences simply, with the focus on the experience itself. Behind the scenes, it ties together machine learning support for multiple domains like Finance, Weather and News, and Integrations that connect the experience to your target platform. Whether you are targeting a bot platform or your own service or application, API.AI offers a gentle learning curve and a machine learning environment that your Agent can learn from, making it a choice worth exploring. Google Actions are fully supported by API.AI as one of its Integrations.

You can get started with API.AI for free. Sign-up is straightforward, and we suggest you do so if you plan to follow the rest of this article in a hands-on manner. Before we begin, it is important to understand some key concepts in API.AI; these will help in the later sections.

In API.AI, the conversational experience or application we are going to write is centered around the concept of an Agent. API.AI requires that we create an Agent, which is the interface between the user and the functionality you wish to invoke as part of fulfilling the request the user sends to the Agent.

The Agent receives commands from the user, either by text or by voice (depending on the user interface), and maps the request to a list of Intents. An Agent is capable of handling a list of Intents, where an Intent represents something the user wants done. For example, “Give me the latest news” can be interpreted as an Intent to get the latest news. If the request is mapped to the Agent’s News Intent, the API.AI platform invokes that Intent. Behind the scenes, the Intent executes an Action that returns a response to the user.
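To make the mapping concrete, here is a trimmed-down sketch of the kind of JSON payload API.AI forwards to your fulfillment code when an Intent matches. It is modeled on API.AI’s v1 webhook format, and the `get_news` action and “News Intent” names are illustrative choices for this example, so check the fields against the current documentation:

```python
# A trimmed-down sketch of the payload API.AI sends when an intent
# matches (modeled on the API.AI v1 webhook format; exact fields may
# differ as the platform evolves).
matched_intent_request = {
    "result": {
        "resolvedQuery": "Give me the latest news",  # what the user said
        "action": "get_news",                        # action name defined on the intent
        "parameters": {},                            # entities extracted from the query
        "metadata": {"intentName": "News Intent"},   # which intent was matched
    }
}
```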

Actions can be connected to your own services and applications via a webhook, and through Integrations your Agent can be invoked and interacted with from multiple bot platforms like Slack and Facebook, and even from voice-enabled devices like Google Home.
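As a rough illustration of what such a webhook looks like, here is a minimal Flask sketch that answers the News Intent from the earlier example. It assumes API.AI’s v1 webhook contract (a JSON body carrying `result.action`, and a response with `speech` and `displayText` fields); treat it as a starting point rather than production code:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    # API.AI POSTs a JSON body describing the matched intent.
    payload = request.get_json(silent=True) or {}
    action = payload.get("result", {}).get("action")

    if action == "get_news":  # the illustrative action from the News Intent
        reply = "Here is the latest news headline."
    else:
        reply = "Sorry, I can't handle that request yet."

    # "speech" is spoken on voice devices; "displayText" is shown on screens.
    return jsonify({"speech": reply, "displayText": reply})

if __name__ == "__main__":
    app.run(port=5000)
```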

What we are going to build

In this article we are going to write a Google Action and test it out on Google Home. We will build an Agent that provides information on population statistics, powered by the Population.io API. Our aim is to show how the Agent can be designed and to demonstrate its ability to interpret natural language.
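To give a feel for the data source, here is a small sketch of how fulfillment code might query Population.io. The endpoint path and the `total_population` response field shown here are assumptions about the public API’s shape, so verify them against the current Population.io documentation before relying on them:

```python
import requests

# Hypothetical helper around the Population.io REST API. The endpoint
# path and the "total_population" response field are assumptions; check
# the current Population.io docs before using this in real fulfillment.
def total_population(country: str, date: str) -> int:
    url = f"http://api.population.io/1.0/population/{country}/{date}/"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.json()["total_population"]["population"]

if __name__ == "__main__":
    print(total_population("India", "2017-01-15"))
```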

Romin Irani
Romin loves learning about new technologies and teaching them to others. His passion is helping developers succeed.

Comments (2)

girishlal

I followed the tutorial and built the app, but in the simulator I always get “Sorry, this action is not available in simulation”. Any ideas on how to resolve that? (The API.AI and simulator Google accounts are the same.)

romin

I have faced this issue occasionally and have found the cause to be one of the following:

  1. A temporary problem with the testing service.
  2. Multiple Google accounts. I made sure I was logged out of all my accounts, then logged in with the particular Google account I wanted to use.

Thanks,

Romin