But now that date format no longer meets the requirements…

It can’t say March 7th 2020 — it should be 7th March 2020!

- return { time: moment().format('MMMM Do YYYY, h:mm:ss a') }

+ return { time: moment().format('Do MMMM YYYY, h:mm:ss a') }

I have to spend at least one full minute of my life waiting for a redeployment? For a one-line change. Outrageous.

Hot Deployment

In order for our Lambdas to just pick up some new code and keep on running, we will:

1. Package our services/routes into a single JS file with Parcel
2. Upload that into a private S3 bucket
3. Get our Lambdas to check that bucket for changes
4. Grab and interpret new code in place of the existing services

Build

Packaging is simple. All of my routes live in a folder called /routes and I have a single /routes/index.js to bring them all together:

const getTime = require('./getTime')

module.exports = {
  getTime,
}

Install Parcel:

npm install --save-dev parcel

Add a script to your package.json which will run Parcel using routes/index.js as its entry point. This will generate our packaged bundle of code: dist/index.js

"scripts": {
  "build": "parcel build routes/index.js --target node --bundle-node-modules --no-source-maps",
  ...
}

Where we’re going, we don’t need source maps. I don’t think so anyway.

Running this script (npm run build) will give me a version of routes/index.js that has the entire tree of sub-modules packaged within it. For example, I used the Moment.js library, so the generated file dist/index.js now contains my code along with all of the Moment.js code, meaning the node_modules folder isn’t needed. Parcel brings in the code for every require() or import.

Technically I’m only using a tiny fraction of the Moment library, which makes it a bad choice — it doesn’t let me import only the modules I need. The date-fns library is more modular, or I might not need Moment at all, but I needed something of a decent size to demonstrate the concept. Parcel has experimental tree-shaking support to strip out unused code from external libraries, but it isn’t stable yet.

S3 Bucket

I’ve added this to serverless.yml to create a new S3 bucket:

resources:
  Resources:
    HotSourceBucket:
      Type: AWS::S3::Bucket

I’ve not given the bucket an explicit name (they have to be globally unique and I am unimaginative) so it will be automatically generated. In order to capture the generated bucket name so that we can use it later, I can export it. Here’s how I’m using the serverless-stack-output plugin to do that…

1. Install the plugin: npm install serverless-stack-output

2. Add this at the root level in serverless.yml:

plugins:
  - serverless-stack-output

custom:
  output:
    file: .build/stack.json

3. Also, add this to serverless.yml within our resources:

Outputs:
  HotSourceBucketName:
    Value:
      Ref: HotSourceBucket

When we next run serverless deploy we’ll get the name of our new bucket in .build/stack.json.
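The output file is a plain JSON map of the stack’s outputs, so it should contain something like the following (the generated suffix will differ, and any other stack outputs will appear alongside it):

```json
{
  "HotSourceBucketName": "packaged-lambda-test-dev-hotsourcebucket-xxxxxx"
}
```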

We can now write a script to upload our packaged dist/index.js into that bucket. It’s not very interesting. You can see it here: utils/uploadSource.js

We now need a function that will grab that source file and eval() it within the Lambda instances: utils/getJSFromS3.js

It’s basically this…

// Get JavaScript file from S3 and eval it
const data = await s3.getObject(s3Params).promise()
const jsSource = data.Body.toString('utf8')
const result = nodeEval(jsSource, './' + sourceFilename)
return result

The full source also does a simple LastModified check and caches the evaluated module, so the bundle isn’t downloaded and re-evaluated unless it has actually changed, which saves some milliseconds on most invocations.

So now our main Lambda handler function can be changed to look like this:

async function getTime(event, context, cb) {
  const bucketName = process.env.BUCKET_NAME
  const routes = await getJSFromS3(bucketName, 'index.js')
  const simpleHttpHandler = routes.getTime
  return simpleLambdaHandler(simpleHttpHandler, event, context, cb)
}

In order to populate that process.env.BUCKET_NAME environment variable with the name of our new bucket, we need this within the provider section of our serverless.yml:

environment:
  # Get the ARN for the bucket that we have created
  # e.g. "arn:aws:s3:::packaged-lambda-test-dev-hotsourcebucket-xxxx"
  BUCKET_ARN: !GetAtt HotSourceBucket.Arn

  # Get the name of the bucket we have created
  # e.g. "packaged-lambda-test-dev-hotsourcebucket-xxxxxx"
  BUCKET_NAME: !Ref HotSourceBucket

And in order for our Lambda function to be allowed access to the contents of our new bucket, we need something like this:

iamRoleStatements:
  # Grant privilege to access S3 bucket
  - Effect: Allow
    Action:
      - s3:GetObject
    # ARN for bucket followed by /* = all objects within bucket
    Resource: !Join ['', [!GetAtt HotSourceBucket.Arn, '/*']]