This article assumes you are familiar with the following things: ClojureScript, AWS Lambda, S3 and API Gateway. I'll only cover them very briefly.

I almost gave up.

You really don't want these in your CloudWatch logs:

```
"errorType": "TypeError",
"errorMessage": "Cannot read property 'cljs$core$IFn$_invoke$arity$1' of undefined",
```

How do you debug this inside a lambda runtime environment?

Short answer: you don't.

Long answer: Continue reading.

Can we reap the benefits of ClojureScript and the REPL to empower our serverless development game?

Here's my journey to a minimal working dev environment for AWS lambdas written in ClojureScript.

On this journey we'll be using Shadow-cljs (which I can't recommend enough) to compile our CLJS code into a Node.js lambda and make use of multiple build targets for playing and testing.

To keep the number of dependencies small, we won't venture into the various AWS CLJS libraries or any other third-party software like Serverless or Claudia.js; instead, we'll rely on judicious use of the aws command line.

For this tutorial, we'll be writing a lambda function, exposed on a public API endpoint, that returns a list of our S3 buckets.

Prerequisites

You'll need

Clojure

(optional) Leiningen (because I still haven't grokked deps)

Shadow-cljs which will compile and hot-reload our code

AWS CLI to upload our lambda code

Node.js for testing and debugging (and to use npm of course)

Create your first lambda function

For this article, we'll just follow the AWS guide to set up our first lambda. Call it lambda-test and pick Node.js 12 as the runtime.

Also let AWS create a suitable role for the lambda.

You now have some callable "Hello World" JavaScript code:



```javascript
exports.handler = async (event) => {
    // TODO implement
    const response = {
        statusCode: 200,
        body: JSON.stringify('Hello from Lambda!'),
    };
    return response;
};
```

We'll soon be replacing this jumble of various syntax quirks with beautiful CLJS code, but first let's look at the bigger picture.

We have a lambda function, but we have yet to define how it can be called and how to get data into and out of it.

This is where triggers come in, which define what is actually in your event object and what return value is expected.

There are various ways to call lambdas, but the most straightforward way is to use the AWS API Gateway. This service will manage an HTTP endpoint for us and delegate HTTP calls to our code.

Let's add an API Gateway trigger. Find the Designer section for your new lambda function and click Add Trigger.

Pick API Gateway, then Create API, choose HTTP API, set Security to Open and enable metrics. (Reminder: this is minimalist; you can get a lot crazier with API Gateway, but we'll stick to the basic stuff for now.)

Hit create and the hard part is almost done.

You should see an API Gateway endpoint below the Designer section along with a link. Bookmark it. Hit it and you should see "Hello from Lambda!"

Yay us!

Sprinkle ClojureScript over it

We'll start from scratch. We need three things:

shadow-cljs.edn - our build configuration

package.json - our JavaScript dependencies

project.clj - a simple lein project to hold our CLJS dependencies (optional)

Find a free spot in your filesystem and open a shell.

$ npx create-cljs-project lambda-test

This gets us our root directory lambda-test, a simple package.json and shadow-cljs.edn, along with some sub-directories.

Leiningen setup (optional)

Note: You can skip the whole project.clj setup if you don't use Cursive (which doesn't understand shadow-cljs dependencies and will complain a lot).

Let's add a simple project.clj to our root directory.



```clojure
(defproject lambda-test "0.1.0-SNAPSHOT"
  :dependencies [[thheller/shadow-cljs "2.8.94"]]
  :plugins []
  :source-paths ["src/main"]
  :test-paths ["src/test"])
```

Run lein deps to check if everything is in order.

In order to make shadow-cljs understand our old-fashioned ways, we need to tweak shadow-cljs.edn. Throw out the source-paths and add :lein true.

Now IntelliJ/Cursive, as well as shadow-cljs, will pick up the CLJS dependencies from project.clj.

Defining build target

In order to create our lambda function, we are going to add our first build target to shadow-cljs.edn - a Node.js library.

Edit shadow-cljs.edn:



```clojure
{:lein true
 :dependencies []
 :builds {:lambda {:target :node-library
                   :output-to "./dist/lambda/index.js"
                   :exports {:handler lambda.main/handler}
                   :compiler-options {:infer-externs :auto}}}}
```

This will make shadow create a node-library and export the function named handler - our main entry point.

We also ask the compiler to infer symbols in external libraries. This will be useful later as we are adding the AWS SDK.

Finally, let's add some code, compile this baby and send it TO ZE CLOUD!

Create src/main/lambda/main.cljs:



```clojure
(ns lambda.main)

;; our main export
(defn handler [event context callback]
  (println event) ;; something for the logs
  (callback nil (clj->js {:statusCode 200
                          :body "Hello from CLJS Lambda!"
                          :headers {}})))
```

This requires some explanation. The handler fn is our main entry point. In contrast to the async example earlier, we are using the 3-arity version, which gives us a callback to call whenever we are ready to finish the lambda execution.

We are returning a minimal JS object that the lambda runtime will turn into an HTTP response.

Compile it with

$ shadow-cljs release :lambda

This should generate a suitably cryptic dist/lambda/index.js.

Updating the lambda function

This assumes you have your AWS CLI set up with a suitable profile. AWS recommends creating a user with a limited set of permissions to use when working with the AWS CLI.

We'll add a bit of convenience to package.json to help us with building and deploying. Edit package.json:



```json
{
  "name": "lambda-test",
  "version": "0.0.1",
  "private": true,
  "devDependencies": {
    "shadow-cljs": "2.8.94"
  },
  "dependencies": {},
  "scripts": {
    "build": "shadow-cljs release :lambda --debug",
    "predeploy": "npm run build",
    "deploy": "cd dist/lambda && zip lambda.zip index.js && aws --profile lambda lambda update-function-code --function-name lambda-test --zip-file fileb://lambda.zip"
  }
}
```

The deploy script will bundle our lambda function into a zip file and upload it. Make sure your --profile setting is correct (I'm using a profile called lambda).

$ npm run deploy

Through the magic of run-script prefixes in the increasingly arcane package.json syntax, this will run the shadow-cljs compiler and then deploy the lambda.

If everything went right, you'll see some JSON output ending in LastUpdateStatus: Successful

What if things went awry?

In this case, here's a checklist:

Has the code been compiled? Check if there's a dist/lambda/index.js file.

Does your AWS CLI profile have the lambda:update-function-code permission? (If not, attach the AWSLambdaFullAccess policy.)

Have you washed your hands?

Now, open your API Endpoint again:

"Hello from CLJS Lambda!"

Oh yay, now we are talking!

Back in the AWS console, find the Monitoring tab and click on View logs in CloudWatch and check the log stream.

You'll find our (println event) looking like this:

INFO #js {:version 2.0, :routeKey ANY /lambda-test, :rawPath /default/lambda-test, :rawQueryString ,....

Lots of juicy data hiding in that event object.

Which we will ignore for now :)
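If curiosity wins anyway, here's a minimal sketch of reading a few of those fields. The function name is mine, not part of the tutorial code; the field names are the ones visible in the log line above:

```clojure
;; Sketch: pull a couple of fields out of the HTTP API event object.
;; request-info is a hypothetical helper, purely for illustration.
(defn request-info [^js event]
  {:path  (.-rawPath event)
   :query (.-rawQueryString event)
   :route (.-routeKey event)})
```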

Adding the AWS SDK

Let's have the code do something useful(-ish).

Enter AWS-SDK, the ginormous JavaScript library to access AWS services.

Thanks to shadow-cljs, we can just add it to our project and :require it in our code:

$ npm install -D aws-sdk

Edit main.cljs and add some code to list all our S3 buckets:



```clojure
(ns lambda.main
  (:require ["aws-sdk" :as AWS]))

(def ^js s3 (new (.-S3 AWS)))

(defn list-buckets [callback]
  (.listBuckets s3 callback))

;; our main export
(defn handler [event context callback]
  (println event) ;; something for the logs
  (list-buckets
    (fn [err buckets]
      (callback nil (clj->js {:statusCode 200
                              :body (.stringify js/JSON buckets)
                              :headers {"Content-Type" "application/json"}})))))
```

Yes, this is mostly callback-hellish code, but it'll do for now (we could add Promesa or core.async to deal with this or use a nice wrapper library).
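To show what one exit from callback-land could look like, here's a hedged sketch (my own code, not part of the tutorial, using only the list-buckets defined above) that wraps it in a js/Promise:

```clojure
;; Sketch: wrap the callback-style list-buckets in a js/Promise.
(defn list-buckets-p []
  (js/Promise.
    (fn [resolve reject]
      (list-buckets
        (fn [err buckets]
          (if err
            (reject err)
            (resolve buckets)))))))

;; consumed with .then/.catch:
;; (-> (list-buckets-p)
;;     (.then (fn [buckets] (js/console.log buckets)))
;;     (.catch (fn [err] (js/console.error err))))
```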

Note the ^js metadata added to the s3 symbol. This instructs shadow-cljs not to mangle the names of functions called on s3 (this works in conjunction with the :infer-externs :auto setting and is only needed when doing advanced compilation).

Let's deploy this and see what we got!

$ npm run deploy

Now load the URL of our API endpoint:

{"message":"Access Denied",....

Drats! What happened?

AWS is making sure that not every half-assed lambda function can simply access your S3 buckets!

Adjust the lambda permissions

Go to your lambda configuration in the AWS console, find the Permissions tab and click on the Execution role name. A new tab should open.

Click on Attach Policies and find the AmazonS3ReadOnlyAccess policy and attach it.

After a couple seconds, open the API endpoint link again.

Aaaaand:



```json
{"Buckets": [{"Name": "foo", "CreationDate": "2018-06-29T15:21:42.000Z"},
             {"Name": "bar", "CreationDate": "2018-06-29T15:18:10.000Z"}],
 "Owner": {"DisplayName": "jochen", "ID": "n0n30fur833swax"}}
```

Success! You have now made the names and IDs of all your buckets available on the internet!

(If that is a problem, quickly throw in a (filter) or restrict access in a more fine-grained way.)
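For illustration, a hedged sketch of such a filter. The "public-" naming convention and the namespace are made up; only the shape of the bucket list comes from the response above:

```clojure
(ns lambda.filter-sketch
  (:require [clojure.string :as str]))

;; Sketch: keep only buckets whose names start with "public-"
;; before putting them into the response body.
(defn public-buckets [buckets]
  (->> (js->clj buckets :keywordize-keys true)
       :Buckets
       (filter #(str/starts-with? (:Name %) "public-"))
       (mapv :Name)))
```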

Note: if the output you are seeing is disappointing, consider creating an S3 bucket first

Now, this is all fine and dandy. And slow. With all this compiling and uploading, there's something lovely missing: The REPL and hot-reloading, shadow-cljs-style!

Debugging Lambda Functions

Now imagine you have all of your 50 lines of CLJS code and there's a bug.

You'll get wonderful cryptic error messages (see beginning of the article) and maybe a stack trace, but without source mapping: good luck finding the offending s-expression!

Luckily, in many cases we can test and debug our code locally!

Let's add another build target to our shadow-cljs.edn file:



```clojure
{:lein true
 :dependencies []
 :builds {:lambda {:target :node-library
                   :output-to "./dist/lambda/index.js"
                   :exports {:handler lambda.main/handler}
                   :compiler-options {:infer-externs :auto}}
          :node {:target :node-script
                 :output-to "./dist/node/index.js"
                 :main lambda.main/start
                 :devtools {:after-load lambda.main/reload}}}}
```

Add two luxurious functions to main.cljs that get called on start and whenever shadow-cljs reloads your code:



```clojure
(defn start [] (pr "Started"))
(defn reload [] (pr "Reloaded"))
```

and start watching the code changes:

$ shadow-cljs watch :node

Now shadow-cljs will re-compile and hot-deploy any changes.
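As a sketch of how to lean on that (my own tweak, not part of the tutorial), the reload hook can do more than print - for example, it can re-invoke the handler on every save:

```clojure
;; Sketch: re-run the handler after each hot reload, so every save
;; exercises the code end to end. The stub callback just logs.
(defn reload []
  (pr "Reloaded")
  (handler nil nil
           (fn [err result]
             (js/console.log err result))))
```

Keep in mind the handler calls AWS, so this only does something useful once the node process is running with valid credentials.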

To actually run the compiled code, let's start node (and set the AWS_PROFILE env variable to the correct profile).

Important: Make sure that the profile also has at least the AmazonS3ReadOnlyAccess policy attached!

$ AWS_PROFILE=lambda node dist/node/index.js

(if you get an error, make sure you wait until shadow-cljs has compiled the code).

Last missing piece, the REPL!

Several options here. On the command line:

$ shadow-cljs cljs-repl :node

will connect to the node process.

Or, if you are using IntelliJ (or any other nREPL client):

Note the port number in the shadow-cljs watch :node command:

shadow-cljs - nREPL server started on port 57756

Connect to the REPL on that port (now you are talking to a Clojure instance that runs the shadow-cljs code), and then use shadow magic to connect to your node instance:

(shadow/repl :node)

Once connected, you can now massage and run your code as usual:



```clojure
(in-ns 'lambda.main)
(handler nil nil (fn [err result] (js/console.log result)))
```

(check the output in the node session)

Change your lambda around, add bells and whistles, maybe some easter eggs and explore the AWS API.

Why is this working?

Luckily, the AWS SDK picks up the necessary credentials (along with roles and permissions) through the AWS_PROFILE environment variable automatically. And the lambda code is also nothing more than a piece of JS deployed to a Node.js system you have no control over, using the same SDK.

Final touches - which we should have started with: Tests

Let's add another build target to shadow-cljs to build and run our tests.

Edit shadow-cljs.edn :



```clojure
{:lein true
 :dependencies []
 :builds {:lambda {:target :node-library
                   :output-to "./dist/lambda/index.js"
                   :exports {:handler lambda.main/handler}
                   :compiler-options {:infer-externs :auto}}
          :node {:target :node-script
                 :output-to "./dist/node/index.js"
                 :main lambda.main/start
                 :devtools {:after-load lambda.main/reload}}
          :test {:target :node-test
                 :output-to "./build/test.js"}}}
```

Add a simple test in src/test/integration_test.cljs (note the underscore: the integration-test namespace maps to a file name with an underscore):



```clojure
(ns integration-test
  (:require [cljs.test :refer (deftest async is)]
            [lambda.main :refer (list-buckets)]))

(deftest list-buckets-test
  (async done
    (list-buckets
      (fn [err buckets]
        (is (nil? err) (str err))
        (is (not= nil buckets))
        (done)))))
```

Noteworthy parts: we are using cljs.test with async, calling done to finish the test, since everything in Node is asynchronous (or should I say: stale? :)

To run our tests, build them and run them with an AWS_PROFILE set.

$ shadow-cljs compile :test

$ AWS_PROFILE=lambda node build/test.js



```
Testing integration-test

Ran 1 tests containing 2 assertions.
0 failures, 0 errors.
```

I leave adding this to a snazzy npm run-script as an exercise to the reader.
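One possible solution, as a sketch (the script names are my choice, and the inline AWS_PROFILE assignment assumes a Unix shell) - merged into the scripts section of package.json:

```json
{
  "scripts": {
    "pretest": "shadow-cljs compile :test",
    "test": "AWS_PROFILE=lambda node build/test.js"
  }
}
```

npm runs pretest before test automatically, mirroring the predeploy trick from earlier.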

There are a few more shadow-cljs options around testing to discover.

Next steps

Now that we've got code running, we can make this a bit more professional.

Add CloudFormation instructions, and move all the setup pieces to actual scripts so we don't need to use the AWS console.

Use the REST API Gateway instead, to have fine-grained control over the endpoint.

Add some stages to our API Gateway, so we can deploy a dev and a prod version of the API.

Add Cognito for authentication.

Find better libraries to use in our CLJS code.

Wash our hands afterwards, of course.

Final words

You now have an INFINITELY SCALING, INFINITELY EXPENSIVE piece of code at hand that allows anyone who can guess your API endpoint URL to see a list of S3 buckets! Yay!

You can REPL and test it locally, then deploy it with a single command to the cloud.

Do let me know in how much trouble you got exposing your S3 buckets!

(Can I interest you in a Deserted Island Getaway Package, yes, yes?)

And let me know if these instructions worked. I tried my best to keep code and description in sync and I hope you'll have much success in writing lambda functions.

So much to learn. It never ends. But this blog post does. Now.

Goodbye.

You still here?

Leave a comment!