We’re starting to explore monorepos. Not in a big, “let’s chuck all our services into one repo” kind of way, but in a smaller, “let’s merge some related services to save some cognitive overhead” kind of way. The stack driving our decisions: we mostly deploy Lambdas to AWS using CircleCI and a splash of CloudFormation, we write our functions in TypeScript, and Yarn has been our dependency management weapon of choice since day one.

Our goal is to cut down the number of repositories we have to manage while retaining the flexibility of different tooling and deployment strategies for each service. We also want to enable easier code sharing between related services but maintain separate build pipelines to avoid redeploying services unnecessarily. It would also be nice to steer clear of the 200-resource limit for CloudFormation stacks and the 250 MB size limit for Lambda bundles.

Here’s a template repository I built based on my findings, for reference: https://github.com/volta-charging/yarn-circleci-monorepo

Our beautiful directory structure (ignore the tsconfig error in the root)

In case you’re too lazy to click that link like I am, this is a screenshot of the directory structure we’ll be working with. Note that there is a package.json and a tsconfig.json both at the top level and in each service subdirectory, while the only yarn.lock lives at the root and each template.yaml lives in a service directory. This demonstrates how the repository is unified from a Yarn and TypeScript perspective, but separate from a Lambda perspective.
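In text form, the layout described above looks roughly like this (the src directory is an assumption; check the template repo for the exact structure):

```
yarn-circleci-monorepo/
├── package.json       # root workspace config
├── tsconfig.json
├── yarn.lock          # the only lock file in the repo
└── services/
    └── service-a/
        ├── package.json
        ├── tsconfig.json
        ├── template.yaml  # per-service CloudFormation template
        └── src/
```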

Yarn Workspaces

As the backbone of bigger tools, such as Lerna, Workspaces provides a good basis for managing multiple services in one repo without adding the constraints of a more comprehensive solution…like Lerna. With Workspaces alone we get some great features:

- Dependencies are installed and versioned per service

- Services can have their own yarn scripts

- We can import one service into another like a regular Node module (I like to call those “packages” for clarity)

It’s basically just Yarn packages within a Yarn package, so there’s not too much mind-blowing stuff to discuss, but there are a few gotchas about our implementation worth calling out:

We use nohoist because we want to deploy services separately and need all of their dependencies to stay in their service directories. Allowing Workspaces to hoist shared dependencies means they would only exist in the root node_modules, so we would have to pull them back into each service during bundling or deployment. Alternatively, we could have used Webpack to ensure the final bundle had all the right source, but nohoist is a simpler solution for now.
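As a sketch, the root package.json for this setup might look something like the following (the glob pattern is an assumption based on our directory layout):

```json
{
  "name": "yarn-circleci-monorepo",
  "private": true,
  "workspaces": {
    "packages": ["services/*"],
    "nohoist": ["**"]
  }
}
```

With nohoist set to ["**"], every dependency stays inside its own service’s node_modules instead of being hoisted to the root.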

Shared logic should be its own Workspace package, or it needs to be symlinked into each service directory. This isn’t necessarily a bad thing; we can follow semantic versioning and even publish the package with Yarn’s help, but it is more complex than just writing a utils.ts file and importing it everywhere. As with nohoist, we need all source files inside each service directory, and the only ways to get them there are adding the shared code as a dependency or symlinking it. Technically, either choice is still symlinking, because that’s what Yarn does internally for shared Workspace packages.
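For illustration, a service could declare a shared Workspace package as a dependency like this (the package names and version are hypothetical):

```json
{
  "name": "service-a",
  "version": "1.0.0",
  "dependencies": {
    "shared-utils": "1.0.0"
  }
}
```

The service can then import from shared-utils like any published module; Yarn symlinks the workspace package into the service’s node_modules.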

CircleCI Magic

The real novel stuff comes in how we use and abuse CircleCI to help us conditionally build and deploy individual services in a repository. If you haven’t used CircleCI before, I’d suggest getting comfortable with aliases (YAML anchors) and commands to better understand the pieces we built.

Our configuration can be boiled down to the following for simplicity:

workflows:
  service-a:
    jobs:
      build:
        steps:
          - step
          - step
          ...
  service-b:
    jobs:
      build:
        steps:
          ...
  ...

We keep our config.yml DRY by extracting reusable steps as aliases and reusable sets of steps as commands. With these building blocks, a job often references a command, which has some steps referenced as aliases. Note that these jobs are service-specific, so we always specify a working_directory at the top of each job to ensure all job commands run relative to that directory.
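To make those building blocks concrete, here’s a rough sketch of how an alias and a command might fit together (the names and steps here are illustrative, not the exact ones from the template repo):

```yaml
aliases:
  # A reusable single step, referenced elsewhere via *restore-build-flag
  - &restore-build-flag
    restore_cache:
      keys:
        - build-flag-{{ checksum "package.json" }}

commands:
  # A reusable set of steps shared by every service's build job
  build-and-test:
    steps:
      - *restore-build-flag
      - run: yarn install
      - run: yarn test
```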

Working Directory

The first “trick” is not really a trick, but it’s simple and powerful. Each job we run needs to be service-specific, so the working_directory we specify at the start of each job defines exactly that:

working_directory: ~/yarn-circleci-monorepo/services/service-a

Thanks to this trick, every command run in this job is specific to that directory. To make that more clear: each directory will be treated pretty much like a separate repository from a build and deployment standpoint. That means all dependencies need to be inside that service’s node_modules directory, which they are thanks to nohoist, and each service must define its own template.yaml (for services that require AWS-based resources to be deployed). It also means the next two tricks we use only need to worry about one service at a time.
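As a sketch (the Docker image is an assumption), a service-specific job might look like this; note that checkout needs an explicit path back to the repository root, since by default it checks out into the working_directory:

```yaml
jobs:
  build-service-a:
    docker:
      - image: cimg/node:16.20
    working_directory: ~/yarn-circleci-monorepo/services/service-a
    steps:
      # Check out the whole repo at the root, not into the service directory
      - checkout:
          path: ~/yarn-circleci-monorepo
      # Every subsequent step runs inside services/service-a
      - run: yarn install
```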

Build/Deploy Flags

This flag prevents us from running unnecessary builds and shipping redundant deployments, so it’s probably the most important trick of all. First, we try to restore a flag for that version of the service; second, we bail early if the flag exists; otherwise, third, we create the flag at the end of the job. Ignore the other steps for now; we’ll get to them later. See the config file in all its glory in the template repository. Here are the dirty details:

restore-build-flag is an alias that runs after checking out the repository. The only thing in this cache is a file called build.flag, and its cache key is based entirely on package.json. That means if package.json hasn’t changed since the last deployment, the flag will be restored and the build will be skipped.

restore_cache:
  keys:
    - build-flag-{{ checksum "package.json" }}

test-build-flag is an alias that can be run right after the build flag is restored. If build.flag exists, CircleCI will skip the rest of this job.

run:
  name: Exit if build flag exists
  command: |
    FILE=build.flag
    if test -f "$FILE"; then
      echo "$FILE exists"
      circleci step halt
    fi

save-build-flag is a command that is run at the end of a successful build to prevent the service from being rebuilt and deployed.

save-build-flag:
  steps:
    - run:
        name: Create build flag
        command: touch build.flag
    - save_cache:
        paths:
          - build.flag
        key: build-flag-{{ checksum "package.json" }}

Dependency Cache

We can also use CircleCI’s cache mechanism the way it was meant to be used: for caching node_modules. The following steps run for each service independently. A dependency cache is typically keyed on the yarn.lock file, which we don’t have in each service directory thanks to Yarn Workspaces. Fortunately, we can generate one per service. Here’s how it works:

generate-lock-file is an alias that runs after the build flag has been tested.

run:
  name: Generate lock file
  command: yarn generate-lock-entry >> yarn.lock

restore-cache is an alias that pulls in node_modules based on the generated yarn.lock. This saved us a lot of build time during the yarn install step.

restore_cache:
  keys:
    - dependencies-cache-{{ checksum "yarn.lock" }}

save-cache is an alias that should be run after yarn install has done the heavy lifting for us, but before yarn install --production has pruned out all those valuable devDependencies.

save_cache:
  paths:
    - node_modules
  key: dependencies-cache-{{ checksum "yarn.lock" }}
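Putting the pieces together, the order of steps in a build job might look roughly like this (in the real config these would be alias references like *restore-build-flag rather than bare names; the build and deploy steps are placeholders):

```yaml
steps:
  - checkout
  - restore-build-flag           # restore build.flag keyed on package.json
  - test-build-flag              # halt early if build.flag exists
  - generate-lock-file           # yarn generate-lock-entry >> yarn.lock
  - restore-cache                # restore node_modules keyed on yarn.lock
  - run: yarn install
  - save-cache                   # cache node_modules before pruning
  - run: yarn install --production  # prune devDependencies for bundling
  # ...build, bundle, and deploy steps here...
  - save-build-flag              # mark this version as built and deployed
```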

Gotchas

There are a few things to keep in mind with this solution:

- We’re using the cache as a build and deploy flag, which it wasn’t designed for; there could be unforeseen complications in the future

- Our job control is limited, so more complex workflows may not be able to use the same test-build-flag strategy

- package.json must change to bust the build flag cache; generally speaking, that means bumping the version of the service at the least

- CircleCI’s API is limited:

- We can’t tell Circle that we skipped the build; it assumes the step succeeded, which is a little misleading

- We can no longer simply rerun workflows that succeeded due to the build flag; we have to go bump the service’s version

None of these seem too unreasonable, and many, such as bumping the package version, are good practices to adhere to in general. I hope to see the usage of this strategy evolve and welcome all feedback!