A brief tutorial to have an Alexa Skill up-and-running using Quarkus, Terraform and AWS Lambda

Writing an Alexa skill can be very fun, as it is quite simple to implement custom logic using the Java Alexa SDK. In this article we will see how to create a simple skill using Quarkus, AWS Lambda and Terraform.

Why?

There are many ways of providing an Alexa skill: you could use different languages such as Python or JavaScript, serve the skill logic from a plain HTTPS endpoint, or provision your backend by copying the code somewhere onto a remote server… My personal motivations were:

- I do not like dynamic languages that much, so… let's go for Java!
- AWS Lambdas are really cheap, as you pay as you go. Especially at the beginning you might not expect much traffic, whereas over time your skill may become so successful that you have to scale out. Well, AWS Lambdas do the job for you.
- Dealing with infrastructure can be really tricky, as it is very easy to lose track of what you have implemented on it. Infrastructure as Code comes to help, and Terraform is a simple enough tool for that purpose.

What you need to know

If you do not even have a rough idea of any of the following:

you will be in trouble while proceeding with what follows. If so, given that I do not know how and why you came here, it may be worth getting prepared on the above topics before putting it all together.

Starting from scratch

As a first step, we have to create the basic structure of our project. Luckily, Quarkus natively supports AWS Lambdas via Maven archetypes (for the fans of Gradle: sorry, it is not fully supported yet).

As reported in Quarkus documentation:

mvn archetype:generate \
    -DarchetypeGroupId=io.quarkus \
    -DarchetypeArtifactId=quarkus-amazon-lambda-archetype \
    -DarchetypeVersion=1.0.1.Final

Just choose your group id, artifact id and base package, and you have a basic skeleton for your Lambda code.

Your project has the following structure:

├── create-native.sh
├── create.sh
├── delete-native.sh
├── delete.sh
├── invoke-native.sh
├── invoke.sh
├── payload.json
├── pom.xml
├── src
│   ├── assembly
│   │   └── zip.xml
│   ├── main
│   │   ├── java
│   │   │   └── io
│   │   │       └── mirko
│   │   │           ├── InputObject.java
│   │   │           ├── OutputObject.java
│   │   │           ├── ProcessingService.java
│   │   │           ├── TestLambda.java
│   │   │           └── UnusedLambda.java
│   │   └── resources
│   │       └── application.properties
│   └── test
│       └── java
│           └── io
│               └── mirko
├── update-native.sh
└── update.sh

Basically, you have:

- A lot of *.sh files, which help you deploy to AWS. We will not use them, so you can safely remove them, unless you want to see how to deploy a Lambda using the AWS CLI
- Your POM file ( pom.xml ). We will change this file to add layering support
- The assembly file ( zip.xml ). We will change this file to add layering support
- The file application.properties . Besides any configuration you may decide to put in, it contains the class name of your Alexa skill entry point

First and foremost: layering

Right now you might run mvn install to see what it produces. You will get a JAR file ( target/<artifact-name>-<artifact-version>-runner.jar ) that is above 4.0 MB. Not bad for a skill that does nothing… and we have not added the necessary dependencies for an Alexa skill yet!

Basically, Quarkus creates a fat JAR file, which is nice to keep things simple, but may become inappropriate as you add more and more dependencies. After all, what you expect is to change your code very frequently, while your dependencies stay stable once you have decided what tools you need.

In addition, if you are not lucky enough to have very good bandwidth, deploying a very big archive to AWS may become a nightmare: the upload has to complete within 5 minutes, or it will fail.

A solution to this problem is to put all your dependencies in a layer: a layer is a bundle of libraries that can be shared by your Lambdas; in the case of Java, it is a ZIP file containing your JAR dependencies in the java/lib directory. You can safely upload your layer ZIP to S3 and reference it from there when creating the layer (and in S3 you do not have the above 5-minute timeout issue).

Creating your layer together with your thin Lambda JAR file requires some changes on your POM and assembly files, as follows:

- Add the maven-assembly-plugin to the <project><build><plugins> section of your POM file, as follows. It should be configured to be as deterministic as possible: you want to avoid unnecessary deployments, and keeping your ZIP file unchanged between identical builds will help you with Terraform
- Configure the java/lib directory in the assembly file as the target directory for all your dependencies
- Add the Alexa SDK to the dependencies

Here is an example of the POM configuration for the Assembly plugin:

...
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <version>3.1.0</version>
    <executions>
        <execution>
            <id>zip-assembly</id>
            <phase>generate-resources</phase>
            <goals>
                <goal>single</goal>
            </goals>
            <configuration>
                <finalName>function</finalName>
                <descriptors>
                    <descriptor>src/assembly/zip.xml</descriptor>
                </descriptors>
                <attach>false</attach>
                <appendAssemblyId>false</appendAssemblyId>
                <outputTimestamp>2019-06-12T00:00:00Z</outputTimestamp>
            </configuration>
        </execution>
    </executions>
</plugin>
...

As you can see, I have forced outputTimestamp : this helps create ZIP files in a deterministic fashion. You might also note that the package creation is bound to the generate-resources phase. It seemed the most appropriate to me, but you are free to choose a different phase; just be aware of when Maven will generate your layer file.

Here is the Assembly configuration file ( zip.xml ).
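The original descriptor did not survive here, so the following is only a minimal sketch of what zip.xml should do, namely pack all runtime dependencies under java/lib; treat it as an assumption, not the exact file from the project:

```xml
<assembly xmlns="http://maven.apache.org/ASSEMBLY/2.0.0">
    <id>zip</id>
    <formats>
        <format>zip</format>
    </formats>
    <includeBaseDirectory>false</includeBaseDirectory>
    <dependencySets>
        <dependencySet>
            <!-- Lambda layers expect Java libraries under java/lib -->
            <outputDirectory>/java/lib</outputDirectory>
            <useProjectArtifact>false</useProjectArtifact>
            <scope>runtime</scope>
        </dependencySet>
    </dependencySets>
</assembly>
```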

Finally, here are the Alexa SDK and X-Ray dependencies:

<dependency>
    <groupId>com.amazon.alexa</groupId>
    <artifactId>ask-sdk</artifactId>
    <version>2.17.0</version>
</dependency>
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-xray-recorder-sdk-aws-sdk</artifactId>
    <version>1.1.2</version>
</dependency>
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-xray-recorder-sdk-aws-sdk-instrumentor</artifactId>
    <version>1.1.2</version>
</dependency>

In the list of dependencies we have included the following:

- the Alexa SDK
- the X-Ray support libraries, in case you have to dig into your Lambda

You might not be interested in using X-Ray at this point, but later in the development of your skill it may come in handy, especially if you are digging into performance issues.

Quarkus: the code

Now it is time to start implementing our Alexa skill.

As mentioned above, Quarkus expects the application.properties file to contain the Lambda bean identification in the quarkus.lambda.handler key.

It is mandatory, and it must name a class that implements com.amazonaws.services.lambda.runtime.RequestHandler , which is more than enough if you are implementing a Lambda that serves “normal” HTTP requests. For an Alexa skill, however, you would like to extend com.amazon.ask.SkillStreamHandler , which is not compatible with this Quarkus requirement.
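For reference, a minimal application.properties could look like this; the handler name is whatever you put in the @Named annotation of your (never invoked) RequestHandler bean:

```properties
quarkus.lambda.handler=name_to_be_put_in_application_properties
```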

After digging a bit, I found out that this requirement comes from the Quarkus Lambda entry point, io.quarkus.amazon.lambda.runtime.QuarkusStreamHandler . Basically, this class does two things:

- it enforces the above annoying requirement
- it initializes the CDI engine

The CDI initialization is implemented by a static block in that class, so you can get rid of that class completely, provided that you trigger, from the static block of your real Lambda class (or at least before you serve the first Alexa request), the static block of io.quarkus.amazon.lambda.runtime.QuarkusStreamHandler . This way you are free to use the com.amazon.ask.SkillStreamHandler class with the benefit of CDI powered by Quarkus. Personally, I went for copying the entire static block code inside my handler, which has the drawback that I have to keep it up to date with Quarkus library changes.
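The trick relies on a plain JVM rule: initializing a class runs its static block exactly once. The following self-contained demo shows the mechanism; the class names are made up for illustration, and in the real handler you would trigger io.quarkus.amazon.lambda.runtime.QuarkusStreamHandler instead:

```java
// Demonstrates triggering another class's static initializer on demand,
// the same mechanism used to bootstrap Quarkus CDI before the first request.
public class StaticInitDemo {
    static boolean bootstrapped = false;

    // Stand-in for io.quarkus.amazon.lambda.runtime.QuarkusStreamHandler
    static class Bootstrap {
        static {
            bootstrapped = true; // in Quarkus, this is where CDI starts
        }
    }

    public static void main(String[] args) throws Exception {
        // The class literal does not initialize the class; Class.forName does,
        // running its static block exactly once.
        Class.forName(Bootstrap.class.getName());
        System.out.println(bootstrapped); // prints true
    }
}
```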

Unfortunately, the requirement of having a com.amazonaws.services.lambda.runtime.RequestHandler is enforced by the CDI initialization in Quarkus, so you have to configure one; it will never be invoked, though, as we will specify a different entry point during the deployment, as we will see later.

Here is my Quarkus request handler in all its glory. Since it will never be invoked, it can safely throw an exception in its handler method.

package io.mirko.lambda;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import javax.inject.Named;

@Named("name_to_be_put_in_application_properties")
public class QuarkusDelegateStreamLambda implements RequestHandler<byte[], byte[]> {
    @Override
    public byte[] handleRequest(byte[] request, Context context) {
        throw new RuntimeException("UnreachableCode");
    }
}

Instead, the real entry point will be something like the following:

package io.mirko.lambda;

import com.amazon.ask.AlexaSkill;
import com.amazon.ask.SkillStreamHandler;
import com.amazon.ask.model.RequestEnvelope;
import com.amazon.ask.model.ResponseEnvelope;
import io.quarkus.arc.impl.ParameterizedTypeImpl;
import io.quarkus.runtime.Application;

import javax.enterprise.inject.spi.Bean;
import javax.enterprise.inject.spi.BeanManager;
import javax.enterprise.inject.spi.CDI;
import javax.inject.Named;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.lang.reflect.Type;

@Named("myStreamLambda")
public class ExampleStreamLambda extends SkillStreamHandler {

    <static block that somehow triggers static block of io.quarkus.amazon.lambda.runtime.QuarkusStreamHandler>

    private static <T> T getBean(Type t) {
        if (!started) {
            throw new IllegalStateException();
        }
        final BeanManager bm = CDI.current().getBeanManager();
        //noinspection unchecked
        Bean<T> bean = (Bean<T>) bm.getBeans(t).iterator().next();
        //noinspection unchecked
        return (T) bm.getReference(bean, t, bm.createCreationalContext(bean));
    }

    public ExampleStreamLambda() {
        //noinspection unchecked
        super((AlexaSkill<RequestEnvelope, ResponseEnvelope>)
                getBean(new ParameterizedTypeImpl(AlexaSkill.class, RequestEnvelope.class, ResponseEnvelope.class)));
    }
}

As we can see:

- the bean name has nothing to do with the Quarkus entry point name
- there is a mysterious getBean method: basically, it fetches the AlexaSkill bean from CDI. We will shortly see how to wire up all your beans

Let’s wire it up!

Now that we have our com.amazon.ask.SkillStreamHandler class, we are free to follow all the tutorials from AWS. Great. But here we want to use CDI to wire up all our components.

As you learn when following the AWS Alexa skill tutorials, com.amazon.ask.SkillStreamHandler needs a com.amazon.ask.AlexaSkill instance in its constructor (if you do not know what I am talking about, please have a look at the AWS samples for Alexa); that instance is normally created by listing a set of implementations of com.amazon.ask.dispatcher.request.handler.RequestHandler , but here we would like to use CDI to create it. We have to face some issues, though:

- the Lambda entry point is instantiated by AWS, so it cannot be a CDI bean and you cannot inject any bean into it
- over time you will create many implementations of com.amazon.ask.dispatcher.request.handler.RequestHandler and you would like to have all of them injected automatically

Let’s solve the second problem first: basically, we want a com.amazon.ask.AlexaSkill that contains all our com.amazon.ask.dispatcher.request.handler.RequestHandler handlers automatically. Here is the trick:

package io.mirko.lambda;

import com.amazon.ask.AlexaSkill;
import com.amazon.ask.Skills;
import com.amazon.ask.dispatcher.request.handler.HandlerInput;
import com.amazon.ask.dispatcher.request.handler.RequestHandler;
import com.amazon.ask.model.RequestEnvelope;
import com.amazon.ask.model.ResponseEnvelope;
import com.amazon.ask.request.interceptor.GenericRequestInterceptor;
import io.mirko.lambda.handlers.*;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Instance;
import javax.enterprise.inject.Produces;
import javax.inject.Inject;
import javax.inject.Named;
import java.util.*;
import java.util.stream.StreamSupport;

public class SkillFactory {
    @Inject
    Instance<RequestHandler> handlers;

    @Produces
    @ApplicationScoped
    @Named
    public AlexaSkill<RequestEnvelope, ResponseEnvelope> createSkill() {
        return Skills.standard()
                .addRequestHandlers(handlers.stream().toArray(RequestHandler[]::new))
                // Add your skill id below
                //.withSkillId("")
                .build();
    }
}

Basically, we use the CDI Produces annotation to define a factory method that produces a bean of type com.amazon.ask.AlexaSkill . That factory method needs all the beans of type com.amazon.ask.dispatcher.request.handler.RequestHandler to create the skill, so we have to inject all of them into our factory class: this is what the injected CDI Instance does.

Now you can safely define your own handlers as follows:

package io.mirko.lambda.handlers;

import com.amazon.ask.dispatcher.request.handler.HandlerInput;
import com.amazon.ask.dispatcher.request.handler.impl.SessionEndedRequestHandler;
import com.amazon.ask.model.Response;
import com.amazon.ask.model.SessionEndedRequest;
import io.mirko.lambda.SessionManager;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.inject.Named;
import java.util.Optional;

@ApplicationScoped
@Named
public class SessionEndedHandler implements SessionEndedRequestHandler {
    @Inject
    SessionManager sessionManager;

    @Override
    public boolean canHandle(HandlerInput input, SessionEndedRequest request) {
        return true;
    }

    @Override
    public Optional<Response> handle(HandlerInput handlerInput, SessionEndedRequest request) {
        sessionManager.clear(handlerInput);
        return handlerInput.getResponseBuilder().build();
    }
}

This is an example of a handler that manages the end of a session. Please notice that it is defined as a standard CDI bean ( Named annotation) and thus it can leverage all the CDI features, such as the injection of other beans.

Now that we have a com.amazon.ask.AlexaSkill bean, we have to inject it into our com.amazon.ask.SkillStreamHandler at construction time: here the ExampleStreamLambda.getBean method you have seen before comes to help: it uses the CDI facilities to find a bean.

Beware: when fetching beans, CDI wants to know not only the class of your bean, but also, in the case of generics, the specific parameterized type: here we have an AlexaSkill<RequestEnvelope, ResponseEnvelope> and CDI wants to know that you want that specific type. Normally you cannot mention a generic type because of type erasure, but let’s look at the javax.enterprise.inject.spi.BeanManager.getBeans signature:

public Set<Bean<?>> getBeans(Type beanType, Annotation... qualifiers);

As you can see, you specify your bean type via java.lang.reflect.Type . In our specific case we need a java.lang.reflect.ParameterizedType , which is used to describe generic type instances. Quarkus provides an implementation of that interface, io.quarkus.arc.impl.ParameterizedTypeImpl , and we use it to refer to the specific AlexaSkill<RequestEnvelope, ResponseEnvelope> type as follows:

new ParameterizedTypeImpl(AlexaSkill.class, RequestEnvelope.class, ResponseEnvelope.class)
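To see what a ParameterizedType actually carries, here is a small self-contained demo. It uses the standard "super type token" trick instead of Quarkus' ParameterizedTypeImpl (which would need the Quarkus jars on the classpath): despite erasure, the generic superclass of an anonymous subclass keeps its type arguments at runtime.

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.ArrayList;
import java.util.List;

public class TypeTokenDemo {
    public static void main(String[] args) {
        // Anonymous subclass: its generic superclass type survives erasure.
        Type t = new ArrayList<List<String>>() {}.getClass().getGenericSuperclass();
        ParameterizedType pt = (ParameterizedType) t;
        System.out.println(pt.getRawType());                // the raw class (ArrayList)
        System.out.println(pt.getActualTypeArguments()[0]); // the List<String> type argument
    }
}
```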

As a result, we have:

- defined all the necessary handlers using CDI
- defined our skill class for our handler using CDI
- implemented a dirty trick to inject our skill into the Alexa handler

Let’s deploy it!

Now it is time to deploy it to AWS (don’t you have an AWS account? Well, maybe this is the right time to create one). Here we need to:

- deploy the model. This is done in the Alexa development console, and I do not have much to say about this task
- deploy the Lambda code using Terraform

Let’s see what we need for the Lambda deployment in AWS using Terraform.

Firstly, your Lambda cannot do whatever it wants in your infrastructure: it must have a role that defines the subset of permissions it is allowed to use.

This is the role definition:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "sts:AssumeRole",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Effect": "Allow",
            "Sid": ""
        }
    ]
}

and this is the Terraform code to create it:

data "template_file" "lambda_role" {
  template = file("${path.module}/lambda.role")
}

resource "aws_iam_role" "lambda_role" {
  name               = "lambda_role"
  assume_role_policy = data.template_file.lambda_role.rendered
}

Now we have to define a policy with the permissions for your Lambda and attach it to the above role.

The policy must also allow the Lambda to create a log group and log streams, otherwise you will not be able to debug it: in case of a crash, it could not print the error anywhere.

Here is an example policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "dynamodb:DescribeTable",
                "dynamodb:GetItem",
                "dynamodb:Query",
                "dynamodb:Scan"
            ],
            "Effect": "Allow",
            "Resource": "${albums_table_arn}"
        },
        {
            "Action": "*",
            "Effect": "Allow",
            "Resource": "${sessions_table_arn}"
        }
    ]
}

(In this example the Lambda accesses a read-only DynamoDB table and a read-write DynamoDB table. There is also the permission to create logs in CloudWatch.)

Here is the Terraform code for the policies and the log group:

resource "aws_cloudwatch_log_group" "lambda_log_group" {
  name              = "/aws/lambda/my-lambda-name"
  retention_in_days = 14
}

data "template_file" "lambda_policy" {
  template = file("${path.module}/lambda.policy")
  vars = {
    albums_table_arn   = aws_dynamodb_table.albums-table.arn,
    sessions_table_arn = aws_dynamodb_table.sessions.arn
  }
}

resource "aws_iam_policy" "lambda_logging" {
  name        = "lambda_logging"
  path        = "/"
  description = "IAM policy for logging from a lambda"

  policy = data.template_file.lambda_policy.rendered
}

resource "aws_iam_role_policy_attachment" "lambda_logs" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = aws_iam_policy.lambda_logging.arn
}

data "aws_iam_policy" "aws_xray_write_only_access" {
  arn = "arn:aws:iam::aws:policy/AWSXrayWriteOnlyAccess"
}

resource "aws_iam_role_policy_attachment" "aws_xray_write_only_access" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = data.aws_iam_policy.aws_xray_write_only_access.arn
}

Here we create a log group where we expect to find all our Lambda logs. Moreover, we create the policy and attach it to the above role.

Please notice that we also attach the standard X-Ray write-only policy, in case you need to activate X-Ray on your Lambda.

Now it is time to store your layer (remember? We keep the dependencies in a separate file). It is uploaded to an S3 bucket and referenced from there when creating the layer.

Here is the Terraform configuration for the layer:

resource "aws_s3_bucket" "layer" {
  bucket        = "mirko-layer"
  acl           = "private"
  force_destroy = "true"
  region        = var.s3_region

  versioning {
    enabled = false
  }

  lifecycle_rule {
    enabled = true

    expiration {
      days = 1
    }
  }
}

resource "aws_s3_bucket_object" "common_layer" {
  bucket = aws_s3_bucket.layer.bucket
  key    = "lambda/layers/common.zip"
  source = "${path.module}/alexa-layer.zip"
}

resource "aws_lambda_layer_version" "common_layer" {
  layer_name        = "common_layer"
  s3_bucket         = aws_s3_bucket_object.common_layer.bucket
  s3_key            = aws_s3_bucket_object.common_layer.key
  s3_object_version = aws_s3_bucket_object.common_layer.version_id
  description       = "Common layer for my lambda"

  compatible_runtimes = ["java8"]
}

Our S3 bucket is needed only to hold the layer ZIP at layer-creation time, so we do not have to keep its contents for long: for this reason, objects expire after one day.

Please notice that we refer to the ZIP layer file created by Maven as alexa-layer.zip .

Finally, we have to create our Lambda as follows:

resource "aws_lambda_function" "lambda" {
  filename      = "${path.module}/alexa-lambda.jar"
  function_name = "my-lambda-name"
  role          = aws_iam_role.lambda_role.arn
  handler       = "io.mirko.lambda.ExampleStreamLambda::handleRequest"
  timeout       = 30
  memory_size   = 256

  # The filebase64sha256() function is available in Terraform 0.11.12 and later
  # For Terraform 0.11.11 and earlier, use the base64sha256() function and the file() function:
  # source_code_hash = "${base64sha256(file("lambda_function_payload.zip"))}"
  source_code_hash = filebase64sha256("${path.module}/alexa-lambda.jar")
  layers           = [aws_lambda_layer_version.common_layer.arn]

  runtime = "java8"

  tracing_config {
    mode = "Active"
  }

  depends_on = [
    "aws_iam_role_policy_attachment.lambda_logs",
    "aws_iam_role_policy_attachment.aws_xray_write_only_access",
    "aws_cloudwatch_log_group.lambda_log_group"
  ]
}

resource "aws_lambda_permission" "alexa-trigger" {
  statement_id  = "AllowExecutionFromAlexa"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.lambda.function_name
  principal     = "alexa-appkit.amazon.com"
}

Please notice that:

- The file name for your Lambda is alexa-lambda.jar . It must contain just your code, NOT your dependencies
- We check for file changes using the filebase64sha256 function, in order to avoid unnecessary deployments
- Beware of the Lambda timeout: by default it is 3 seconds, which is a really low value, especially in case of a cold start of your Lambda
- Finally, we have added the permission for Alexa to invoke our Lambda ( alexa-trigger )
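If you want to check locally whether a rebuild really changed the artifact, note that Terraform's filebase64sha256() is just the base64-encoded binary SHA-256 of the file, which you can reproduce with openssl (the file name below is illustrative):

```shell
# Reproduce Terraform's filebase64sha256() for a local file.
printf 'dummy content' > alexa-lambda.jar
openssl dgst -sha256 -binary alexa-lambda.jar | base64
# A SHA-256 digest is 32 bytes, so the base64 form is always 44 characters.
```

If two builds print the same hash, Terraform will not redeploy the function.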

Conclusion

In this article we have created the necessary infrastructure to benefit from Quarkus and Terraform to create and deploy an Alexa Skill.

Many topics have not been covered, such as testing, properties injection, AWS components injection, and, more importantly, a GitHub project with the whole bunch of code.

Maybe in the next articles we will cover such missing points.