What if I told you there is a tool that will help you to:

Improve the quality of your Alexa skill

Increase the chances your skill will get good reviews

Enhance the engagement and retention of your users

And most impressive: you don’t need to have deep technical experience to use it.

Presenting Bespoken Virtual Device Test Scripts

With this tool you can create human-readable test scripts to perform end-to-end or regression testing for your Alexa skills (Google Actions support coming soon!).

Through our extensive experience in the voice-first space, we have found voice test scripts to be a lifesaver when it comes to ensuring skill quality. What is a voice test script? It is a series of interactions whose purpose is to verify your skill’s functionality.

How to Test Alexa Skills and Improve Skill Quality

Running test scripts for your skill allows you to detect issues or unexpected behaviors before releasing the voice app and reaching your users. In other words, assuring the quality of your skill helps prevent 1-star reviews and improves your users’ experience. That means more engagement and potentially more retention (of course, this also depends on the content your voice app offers).

You may be testing Alexa skills already by talking to your device or using EchoSim.io — but when using Bespoken’s test scripts, you can eliminate all that manual work. Write your tests easily using our simple scripting language, and run them whenever you want!

Tutorial: How to Create Alexa Test Scripts Using Bespoken Virtual Device

Bring! Shopping List is an excellent Alexa skill for managing your grocery lists. The skill is super intuitive and very well done, and it is available in English (US/GB) and German. It also has Android and iPhone apps and a web interface, so you can check your lists whenever and wherever you go. We love it, and that’s why we are going to use it to show you how to get started with end-to-end (e2e) testing.

The main functionalities Bring! offers are:

(Functionality list from Amazon’s skill page.)

Let’s focus on the first three: adding items to the list, removing items, and reading the list’s contents.

Based on these three key functionalities, we create our test plan: three test scripts, which, as you can imagine, are linked to three intents in the skill (we can also create one script to test the Launch Request).

Installing Bespoken Virtual Device

To be able to use Bespoken’s automatic Voice Test Scripts, you first need to install our CLI. To install, run this on your command line (you need npm first; get it here):

$ npm install -g bespoken-tools

Setting Up the Voice Test Scripts Project

To keep your test scripts easy to maintain, we recommend creating a dedicated project for them. You can use your favorite IDE. This is what our demo looks like:

As you can see, we have a folder for each locale; inside each folder are the test scripts for each intent (or functionality).
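A layout along these lines matches that description (the script file names other than launchRequest.e2e.yml are illustrative):

```
test-project/
├── de-DE/
│   ├── launchRequest.e2e.yml
│   ├── addItems.e2e.yml
│   ├── removeItems.e2e.yml
│   └── readList.e2e.yml
├── en-US/
├── en-GB/
└── testing.json
```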

Then we have the testing.json file, which is very important. Let’s look at its contents:

In this file, we tell Bespoken Tools this is an End-to-end (type: e2e) test script, so it should use our Virtual Device to process the skill interactions.
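As a minimal sketch, a testing.json for this setup can be as small as the following; the type field is the only setting this article relies on:

```json
{
  "type": "e2e"
}
```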

To help you get started please download or clone this e2e test project from our GitHub repository.

There is no need to change any of the parameters shown in the example.

Creating End-to-end Test Scripts

This is what a test script looks like:

It begins with a configuration section where you define your locale, the Polly voice to use for Text-To-Speech (check the available voices here; some voices work better than others in certain locales), and a token used to process this script’s interactions with a specific Virtual Device (you will need a different Virtual Device token for each locale you are testing).

Then we have sequences of interactions; each sequence represents a test scenario, that is, a piece of functionality we want to verify.

As you will notice, each line has two parts separated by a colon (:). The left part is the utterance we execute against the skill; the right part is the expected result (also called the transcript or prompt).

When we run this script, we execute each interaction, compare the actual result with the expected one, and then show the outcome of the test.
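Putting the pieces together, a script along these lines illustrates the structure described above. The configuration field names, the Bring! responses, and the utterances shown here are illustrative, and the token is a placeholder; copy the structure, not the values:

```yaml
# Configuration section: locale, Polly voice, and Virtual Device token.
config:
  locale: en-US
  voiceId: Joanna
  virtualDeviceToken: <your-virtual-device-token>

# Interactions: "utterance": "expected result" (partial match by default).
"open bring": "welcome to bring"
"add milk to my list": "* milk *"
"what is on my list": "* milk *"
```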

The syntax we use is based on YAML and supports several nice features when defining the transcripts, such as:

Wildcards: By default, we do a partial match. What does this mean? If the expected result is “welcome to my skill” and the actual response is “hi, welcome to my skill, hope you enjoy it”, the test will succeed. It will also succeed if the expected result is “welcome * skill”. This is a very useful way to focus only on specific words or phrases when running the tests.

Lists: Since it’s good practice to make skills conversational, it’s common to have variable responses, and any of them should count as a match against the actual response. To allow this we have lists: the transcript (expected response) can include several valid options, and if the actual response matches any of them, the test will succeed.
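A list of valid transcripts might be sketched like this (the responses are invented for illustration); the test passes if the actual response matches any entry:

```yaml
"open bring":
  - "welcome to bring"
  - "hi, welcome back to bring"
  - "hello, what would you like to add"
```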

Cards: If your skill generates a card in the response, we can also test that. It is as simple as adding the card fields and the expected values to the transcript, like this example:

As you can see, the Card object is located below the prompt. Take into account that here the match is exact, so capitalization and punctuation count. To add a line break, use “\n” as shown in the example.
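A sketch of a card assertion might look like the following; the field names (card, title, content) and values are hypothetical, and remember that card matching is exact:

```yaml
"what is on my list":
  prompt: "* milk *"
  card:
    title: "Your Shopping List"
    content: "Milk\nBread"
```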

Executing the Voice Test Scripts

You can run the test scripts one by one or an entire set at once. For example, using the sample project, we can run the Launch Request test script for the German locale like this (open the command line at the root of your test project):

$ bst test de-DE/launchRequest.e2e.yml

The result of the execution looks like the image below (green means success, red means there was an issue):

If you prefer to run the entire set of scripts for the German locale you can execute this:

$ bst test de-DE/

Or execute all the scripts for all the locales:

$ bst test

Conclusion

Now it is time for you to create test scripts for your Alexa skills. If you come across a problem, don’t hesitate to contact us; we will be happy to help.

This post focuses on e2e testing. But there is much more, and we at Bespoken are devoted to making it easy for you to cover all four layers of testing.

We make Alexa skill testing tools for each of these layers. Visit bespoken.io to learn more!

P.S. Create a Bespoken Dashboard

Don’t forget to create your own virtual devices. The tokens we used in this example are just for demonstration. To do real testing, you will need to create a Dashboard. Let me show you how.

The test scripts run on a Bespoken Virtual Device, which is created through the Bespoken Dashboard. That means we first need to create an account there; it’s pretty easy, and you’ll get a lot of benefits that I will tell you about at the end of this section.

Once you have signed up, add a source for your voice app by clicking the big + sign, add a name, and click on the “Validate your new skill >>” button. You will see a page like this:

The goal here is to get an Authorization token from Amazon so the Bespoken Virtual Device we are going to create can access your skills and run the test scripts.

To get your authorization token click on the “Get validation token” link. You will be asked if you want Bespoken Virtual Device to access your skills, just say yes. The token will be automatically retrieved.

Note: If your skill has account linking and uses the device address, you should update the Bespoken Virtual Device address through the Alexa website.

Now that you have created a Bespoken Dashboard account, it would be a crime not to use our Monitoring services:

As you can see, there are three columns: an input (e.g. “open bring”), an expected result (“welcome * how can i help”), and the actual result. This is very similar to what we saw earlier in this article; in fact, this tool uses our Virtual Device as well.

Fill in the rows with utterances that exercise the most important functionality of your skill and enable monitoring (the red icon in the upper right corner). The tests run every 30 minutes, and you will be notified whenever one fails. This is a great way to detect problems before your users do and take action to avoid losing engagement.

Now that you are familiar with a few different ways to test Alexa skills, give it a try today.