Sponsor me on Patreon to support more content like this.

Part 1 - Introduction - Getting started

Part 2 - Offline support and debugging (todo)

Part 3 - Scheduled Lambdas (todo)

Part 4 - Generating a PDF in Lambda using layers (todo)

Part 5 - Storing files in S3 and sending emails with SES (todo)

Repo linked at the bottom of the article.

Introduction

This is an updated version of part one of my series on creating a start-up, using serverless technologies and Golang.

Unfortunately, when I came to write part 2, after a break from technical writing (my apologies), the world had changed somewhat, and the technologies along with it. I've also learnt a lot since writing the original part 1, having used Lambdas in production very heavily over the past few months in my new job.

To recap, the thing we're building is a work tracker for freelancers, which will allow a user to track their time and automatically generate an invoice, totalling up the hours and working out the money owed for a period of work. It's something I could really use in my personal life; hopefully you'll find it useful, too.

The purpose of this series is to demonstrate the journey of building a real-world use-case using serverless technologies and, of course, Golang.

In this updated version, I will create three serverless services: clients, sprints and items. Clients represent the people or companies we do work for. Sprints represent the periods of time for which payment should be totalled up. Finally, items represent individual tickets of billable work.

We will add up the work automatically for each sprint, apply the rate set for that client to the total hours, and create a PDF with the total on a monthly basis.
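As a rough sketch of that calculation (using hypothetical types and a hypothetical `InvoiceTotal` helper — the real models arrive later in the series):

```go
package main

import "fmt"

// Item is a single ticket of billable work. This is a simplified,
// illustrative shape, not the exact model used in the repo.
type Item struct {
	Hours float64
}

// InvoiceTotal sums the hours across a sprint's items and applies
// the client's hourly rate to get the amount owed.
func InvoiceTotal(items []Item, rate float64) float64 {
	var hours float64
	for _, item := range items {
		hours += item.Hours
	}
	return hours * rate
}

func main() {
	items := []Item{{Hours: 3.5}, {Hours: 2}}
	// 5.5 hours at a rate of 40
	fmt.Println(InvoiceTotal(items, 40)) // prints 220
}
```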

In this part, we'll add our basic endpoints and complete the basic data structure. I spent a lot of time and research trying to find an architecture I liked for a Lambda/Golang project, so hopefully you'll find that useful, too.

Directory structure

First we have a functions folder, which contains the entrypoint to each of our functions. We've broken those down by their functionality, or domain. Each function directory includes a model directory, which contains the model and the repository for that function.

Then we have a pkg directory, which, according to the suggested Go project layout, should contain re-usable code: code that could also be used externally.

In there we have a datastore package, which deals with connection management and some common abstractions, over DynamoDB in our case. Finally, there's an http package, which contains some common functionality for dealing with HTTP requests in Lambdas, including some type aliases for the request/response structs in the API Gateway library, as they're somewhat verbose in their naming conventions.
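Putting that together, the layout looks roughly like this (a sketch based on the paths mentioned in this article; your exact file names may differ):

```
invoicely/
├── functions/
│   ├── clients/
│   │   ├── main.go
│   │   └── model/
│   ├── sprints/
│   └── items/
├── pkg/
│   ├── datastore/
│   │   └── dynamodb.go
│   └── http/
├── serverless.yml
└── Makefile
```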

So let's start going through one of the functions to begin with:

```go
package main

import (
	"log"
	"net/http"
	"os"

	"github.com/EwanValentine/invoicely/functions/clients/model"
	"github.com/EwanValentine/invoicely/pkg/datastore"
	helpers "github.com/EwanValentine/invoicely/pkg/http"
	"github.com/aws/aws-lambda-go/lambda"
)

// ClientRepository -
type ClientRepository interface {
	Get(id string) (*model.Client, error)
	List() (*[]model.Client, error)
	Store(client *model.Client) error
}

// Handler -
type Handler struct {
	repository ClientRepository
}

// Store a resource
func (h *Handler) Store(request helpers.Req) (helpers.Res, error) {
	var client *model.Client
	if err := helpers.ParseBody(request, &client); err != nil {
		return helpers.ErrResponse(err, http.StatusBadRequest)
	}
	if err := h.repository.Store(client); err != nil {
		return helpers.ErrResponse(err, http.StatusInternalServerError)
	}
	return helpers.Response(map[string]bool{
		"success": true,
	}, http.StatusCreated)
}

// Get a single resource
func (h *Handler) Get(id string, request helpers.Req) (helpers.Res, error) {
	client, err := h.repository.Get(id)
	if err != nil {
		return helpers.ErrResponse(err, http.StatusNotFound)
	}
	return helpers.Response(map[string]interface{}{
		"client": client,
	}, http.StatusOK)
}

// List resources
func (h *Handler) List(request helpers.Req) (helpers.Res, error) {
	clients, err := h.repository.List()
	if err != nil {
		return helpers.ErrResponse(err, http.StatusNotFound)
	}
	return helpers.Response(map[string]interface{}{
		"clients": clients,
	}, http.StatusOK)
}

func main() {
	// Create a connection to the datastore, in this case, DynamoDB
	conn, err := datastore.CreateConnection(os.Getenv("REGION"))
	if err != nil {
		log.Panic(err)
	}

	// Create a new DynamoDB table instance
	ddb := datastore.NewDynamoDB(conn, os.Getenv("DB_TABLE"))

	// Create a repository
	repository := model.NewClientRepository(ddb)

	// Create the handler instance, with the repository
	handler := &Handler{repository}

	// Pass the handler into the router
	router := helpers.Router(handler)

	// Start the Lambda process
	lambda.Start(router)
}
```

I've left some comments in the code itself, but it should be fairly self-explanatory. We have three actual functions: Store, Get, and List. We created a Router function in our http helper package, which will route a GET request with no id parameter set to List, a GET request with an id parameter set to Get, and finally, a POST request to Store. If you can find a way to re-use the same Lambda function for a few related tasks, it can save you from having to write some overly verbose, repetitive code.

I also added some tests for these endpoints:

```go
package main

import (
	"net/http"
	"testing"

	"github.com/EwanValentine/invoicely/functions/clients/model"
	httpdelivery "github.com/EwanValentine/invoicely/pkg/http"
	"github.com/stretchr/testify/assert"
)

type MockClientRepository struct{}

func (r *MockClientRepository) Get(id string) (*model.Client, error) {
	return &model.Client{
		ID:          "123",
		Name:        "some client",
		Rate:        40,
		Description: "Some client!",
	}, nil
}

func (r *MockClientRepository) Store(*model.Client) error {
	return nil
}

func (r *MockClientRepository) List() (*[]model.Client, error) {
	return &[]model.Client{
		{
			ID:          "123",
			Name:        "some client",
			Rate:        40,
			Description: "Some client!",
		},
	}, nil
}

func TestCanFetchClient(t *testing.T) {
	request := httpdelivery.Req{
		HTTPMethod:     "GET",
		PathParameters: map[string]string{"id": "123"},
	}
	h := &Handler{&MockClientRepository{}}
	router := httpdelivery.Router(h)
	response, err := router(request)
	assert.NoError(t, err)
	assert.Equal(t, http.StatusOK, response.StatusCode)
}

func TestCanCreateClient(t *testing.T) {
	request := httpdelivery.Req{
		HTTPMethod: "POST",
		Body:       `{ "name": "test client", "description": "some test", "rate": 40 }`,
	}
	h := &Handler{&MockClientRepository{}}
	router := httpdelivery.Router(h)
	response, err := router(request)
	assert.NoError(t, err)
	assert.Equal(t, http.StatusCreated, response.StatusCode)
}

func TestCanListClients(t *testing.T) {
	request := httpdelivery.Req{
		HTTPMethod: "GET",
	}
	h := &Handler{&MockClientRepository{}}
	router := httpdelivery.Router(h)
	response, err := router(request)
	assert.NoError(t, err)
	assert.Equal(t, http.StatusOK, response.StatusCode)
}

func TestHandleInvalidJSON(t *testing.T) {
	request := httpdelivery.Req{
		HTTPMethod: "POST",
		Body:       "",
	}
	h := &Handler{&MockClientRepository{}}
	router := httpdelivery.Router(h)
	response, err := router(request)
	assert.Error(t, err)
	assert.Equal(t, http.StatusBadRequest, response.StatusCode)
}
```

We use Golang's interfaces and the dependency injection pattern to inject a fake version of our repository into an instance of the handler. We then pass in some fake requests and test that we get the expected output, including an error case.

Here's our model for that function:

```go
package model

// Client model
type Client struct {
	ID          string `json:"id"`
	Name        string `json:"name"`
	Description string `json:"description"`
	Rate        int32  `json:"rate"`
}
```

And here's our repository:

```go
package model

import (
	"github.com/EwanValentine/invoicely/pkg/datastore"
	uuid "github.com/satori/go.uuid"
)

// NewClientRepository instance
func NewClientRepository(ds datastore.Datastore) *ClientRepository {
	return &ClientRepository{datastore: ds}
}

// ClientRepository stores and fetches items
type ClientRepository struct {
	datastore datastore.Datastore
}

// Get a single client
func (r *ClientRepository) Get(id string) (*Client, error) {
	var client *Client
	if err := r.datastore.Get(id, &client); err != nil {
		return nil, err
	}
	return client, nil
}

// Store a new client
func (r *ClientRepository) Store(client *Client) error {
	id := uuid.NewV4()
	client.ID = id.String()
	return r.datastore.Store(client)
}

// List all clients
func (r *ClientRepository) List() (*[]Client, error) {
	var clients *[]Client
	if err := r.datastore.List(&clients); err != nil {
		return nil, err
	}
	return clients, nil
}
```

Again, fairly straightforward. We're taking an instance of our datastore and passing in our domain-specific models. The datastore package is generic: it just deals with interfaces and common DynamoDB functionality, and has no awareness of our core models. So our repository deals with translating the results into our specific models.

Now let's start taking a look at our pkg files. First up, we have our DynamoDB adapter, which you can find in pkg/datastore/dynamodb.go. There you'll see:

```go
package datastore

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
)

// CreateConnection to dynamodb
func CreateConnection(region string) (*dynamodb.DynamoDB, error) {
	sess, err := session.NewSession(&aws.Config{
		Region: aws.String(region),
	})
	if err != nil {
		return nil, err
	}
	return dynamodb.New(sess), nil
}

// DynamoDB is a concrete implementation
// to interface with common DynamoDB operations
type DynamoDB struct {
	table string
	conn  *dynamodb.DynamoDB
}

// NewDynamoDB - creates new dynamodb instance
func NewDynamoDB(conn *dynamodb.DynamoDB, table string) *DynamoDB {
	return &DynamoDB{
		conn:  conn,
		table: table,
	}
}

// List gets a collection of resources
func (ddb *DynamoDB) List(castTo interface{}) error {
	results, err := ddb.conn.Scan(&dynamodb.ScanInput{
		TableName: aws.String(ddb.table),
	})
	if err != nil {
		return err
	}
	if err := dynamodbattribute.UnmarshalListOfMaps(results.Items, &castTo); err != nil {
		return err
	}
	return nil
}

// Store a new item
func (ddb *DynamoDB) Store(item interface{}) error {
	av, err := dynamodbattribute.MarshalMap(item)
	if err != nil {
		return err
	}
	input := &dynamodb.PutItemInput{
		Item:      av,
		TableName: aws.String(ddb.table),
	}
	_, err = ddb.conn.PutItem(input)
	return err
}

// Get an item
func (ddb *DynamoDB) Get(key string, castTo interface{}) error {
	result, err := ddb.conn.GetItem(&dynamodb.GetItemInput{
		TableName: aws.String(ddb.table),
		Key: map[string]*dynamodb.AttributeValue{
			"id": {
				S: aws.String(key),
			},
		},
	})
	if err != nil {
		return err
	}
	if err := dynamodbattribute.UnmarshalMap(result.Item, &castTo); err != nil {
		return err
	}
	return nil
}
```

Some of AWS's libraries can be a little verbose, especially the DynamoDB SDK, and writing out these common functions over and over again gets repetitive. So we want to abstract this behaviour and make it re-usable.

We're passing in a variable named castTo, which is an interface{}. When we call these functions, instead of returning the result, we pass in a pointer and set the value of the pointer. This allows us to specify the type of the result in the repository function, which in turn allows us to keep this functionality generic and re-usable. Pretty handy! It's similar to how you'd use the encoding/json functionality, i.e. json.Unmarshal(data, &item), where item is a pointer you pass in to have the result set on it.
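To make that pattern concrete, here's a tiny standalone sketch (not part of the repo; `decode` is a hypothetical helper) that uses encoding/json in the same way our datastore uses castTo:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decode works like our datastore methods: the caller passes a pointer
// as castTo, and we set its value, so this function never needs to know
// the concrete type it's decoding into.
func decode(raw []byte, castTo interface{}) error {
	return json.Unmarshal(raw, castTo)
}

func main() {
	type Client struct {
		Name string `json:"name"`
	}

	// The caller decides the result type by passing a typed pointer.
	var client Client
	if err := decode([]byte(`{"name":"Acme"}`), &client); err != nil {
		panic(err)
	}
	fmt.Println(client.Name) // prints Acme
}
```

The same generic `decode` could just as easily fill a slice or a map, which is exactly why the datastore package stays unaware of our core models.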

Finally, we have our http helpers:

```go
package http

import (
	"encoding/json"
	"errors"
	"net/http"

	"github.com/aws/aws-lambda-go/events"
)

// ResponseError -
type ResponseError map[string]error

// Req is an alias for an api gateway request
type Req events.APIGatewayProxyRequest

// Res is an alias for an api gateway response
type Res events.APIGatewayProxyResponse

// Response is a wrapper around the api gateway proxy response, which takes
// an interface argument to be marshalled to json and returned, and a status code
func Response(data interface{}, code int) (Res, error) {
	body, _ := json.Marshal(data)
	return Res{
		Body:       string(body),
		StatusCode: code,
	}, nil
}

// ErrResponse returns an error in a specified format
func ErrResponse(err error, code int) (Res, error) {
	data := map[string]string{
		"err": err.Error(),
	}
	body, _ := json.Marshal(data)
	return Res{
		Body:       string(body),
		StatusCode: code,
	}, err
}

// RestHandler represents a RESTful Lambda handler
type RestHandler interface {
	Get(id string, request Req) (Res, error)
	Store(request Req) (Res, error)
	List(request Req) (Res, error)
}

// ParseBody takes the body from the request, parses the json to a given struct pointer
func ParseBody(request Req, castTo interface{}) error {
	return json.Unmarshal([]byte(request.Body), &castTo)
}

// RequestHandleFunc is an alias for an api gateway request signature
type RequestHandleFunc func(request Req) (Res, error)

// Router routes restful endpoints to the correct method:
// GET without an ID in the path parameters calls the List method,
// GET with an ID calls the Get method,
// POST calls the Store method.
func Router(h RestHandler) RequestHandleFunc {
	return func(request Req) (Res, error) {
		switch request.HTTPMethod {
		case "GET":
			id := request.PathParameters["id"]
			if id != "" {
				return h.Get(id, request)
			}
			return h.List(request)
		case "POST":
			return h.Store(request)
		default:
			return ErrResponse(errors.New("method not allowed"), http.StatusMethodNotAllowed)
		}
	}
}
```

I've left comments throughout this code, but here we mostly have helper functions and type aliases, which abstract some common behaviours, such as the router we talked about briefly in our function code. We have a few type aliases for requests and responses. Again, because the AWS SDK is a little... wordy at times.

I've also updated the serverless.yml file to flesh out our new functions:

```yaml
service: invoicely

provider:
  name: aws
  runtime: go1.x
  region: eu-west-1
  environment:
    REGION: "eu-west-1"
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "arn:aws:dynamodb:eu-west-1:*:table/*"

package:
  exclude:
    - ./**
  include:
    - ./bin/**

functions:
  create-client:
    handler: bin/clients
    environment:
      DB_TABLE: Clients
    events:
      - http:
          path: clients
          method: post
          cors: true
  fetch-clients:
    handler: bin/clients
    environment:
      DB_TABLE: Clients
    events:
      - http:
          path: clients
          method: get
          cors: true
  fetch-client:
    handler: bin/clients
    environment:
      DB_TABLE: Clients
    events:
      - http:
          path: clients/{id}
          method: get
          cors: true
  create-sprint:
    handler: bin/sprints
    environment:
      DB_TABLE: Sprints
    events:
      - http:
          path: sprints
          method: post
          cors: true
  fetch-sprints:
    handler: bin/sprints
    environment:
      DB_TABLE: Sprints
    events:
      - http:
          path: sprints
          method: get
          cors: true
  fetch-sprint:
    handler: bin/sprints
    environment:
      DB_TABLE: Sprints
    events:
      - http:
          path: sprints/{id}
          method: get
          cors: true
  create-item:
    handler: bin/items
    environment:
      DB_TABLE: Items
    events:
      - http:
          path: items
          method: post
          cors: true
  fetch-items:
    handler: bin/items
    environment:
      DB_TABLE: Items
    events:
      - http:
          path: items
          method: get
          cors: true
  fetch-item:
    handler: bin/items
    environment:
      DB_TABLE: Items
    events:
      - http:
          path: items/{id}
          method: get
          cors: true

resources:
  Resources:
    clientsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: Clients
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
    sprintsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: Sprints
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
    itemsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: Items
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
```

Hopefully this makes sense, but I'll go through it briefly anyway. First of all, we declare our serverless project's top-level configuration, such as which language we're using and which permissions are required. In our case, mostly DynamoDB permissions.

Under the hood, Serverless creates a CloudFormation stack, and much of the configuration in this yaml file is actually CloudFormation.

Next we have our functions: two GET requests, one with an id parameter in the URL, and finally a POST request. There are three URLs which map to a single function for each entity, or business model. Internally, the router we created in our http helpers code maps these three different routes to the three handlers (Get, List, Store) in our Lambda function handler.

Then at the bottom, under resources, we have some CloudFormation configuration which defines our DynamoDB tables. I love being able to define the infrastructure for our services as part of the same repo. It's really good practice to group anything related to a specific feature or service together in the same project/repo, so the ability to add any CloudFormation to our serverless.yml file is really powerful. This is great for microservices, for example.

Finally, I've updated the Makefile to add the new functions.

```makefile
GOBUILD=env GOOS=linux go build -ldflags="-s -w" -o

build:
	go get -u
	go mod vendor
	$(GOBUILD) bin/clients functions/clients/main.go
	$(GOBUILD) bin/items functions/items/main.go
	$(GOBUILD) bin/sprints functions/sprints/main.go

test:
	go test ./...

deploy:
	sls deploy
```

So now you should be able to run $ make build, which will build all of our functions into Go binaries, which are then referenced in our serverless config. Once those have built, we can run $ make deploy, or $ sls deploy if you're not using the Makefile.

Deploying will create all of our functions behind API Gateway, create the permissions needed to talk to DynamoDB, and finally create our DynamoDB tables. Pretty cool!

Once that's deployed, you should now be able to make a request to each of those endpoints. For example, to create a new client:

```shell
$ curl -XPOST --url https://<lambda-gateway-url>/clients \
  --header 'Content-Type: application/json' \
  --data '{ "name": "This is a test", "rate": 40 }'
```

Fetch all clients:

```shell
$ curl --url https://<lambda-gateway-url>/clients \
  --header 'Content-Type: application/json'
```

Or to fetch a specific client:

```shell
$ curl --url https://<lambda-gateway-url>/clients/<id> \
  --header 'Content-Type: application/json'
```

Hopefully this demonstrates how to create a few basic services using Golang, DynamoDB and Lambda. Next time in this series, we'll look at adding offline support for local testing, including a local DynamoDB instance, so we can test with test data, too!

Full repository can be found here.


If you found this series useful, and you use an ad-blocker (who can blame you), please consider chucking me a couple of quid for my time and effort. Cheers! https://monzo.me/ewanvalentine