This approach harbors a few risks.

If you build your application anew for each deployment, the artifact you tested may not be the same as the one that enters production.

It is important to realize that a build process isn’t always deterministic. Even if the same code enters the build, it’s not guaranteed that the same artifact comes out.

The artifact doesn’t just depend on your code — it also depends on third-party libraries, operating system updates or other environmental changes that happen over time.

Just think back to the times before we had a package-lock.json. Back then, builds were even less predictable. Third-party libraries often appear with a ^ inside our package.json. Despite semver, a new version of a third-party library would sometimes be incompatible with another dependency and break your application.

The more consistency that you can keep between deployments, the more likely the production deployment is to go smoothly.

Continuous delivery and the environment files provided by the Angular CLI do not fit together!

In continuous delivery, your artifact needs to get environment-specific configurations at startup or at runtime. The Angular CLI, on the other hand, bakes those configurations in at build time.

So how can we combine the ideas of continuous delivery with our Angular application?

There are different approaches to combine the ideas of continuous delivery with the Angular CLI. Each one comes with advantages and downsides.

Let’s have a look at them.

Provide environment configuration over a REST endpoint

An Angular application cannot access runtime environment variables because it runs in the browser.

But in most cases our frontend does not come alone; it talks to backend services that it fetches data from or pushes data to.

And guess what, the backend has access to environment variables. So let’s use that to our advantage and fetch our environment specific configurations from the backend.

All we need on the backend side is a REST endpoint that delivers the configurations. Depending on your backend, the way of accessing environment variables differs. So let’s focus on the Angular part.
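Since the article does not prescribe a particular backend, here is a hedged sketch of what such an endpoint could look like in a Node backend. The helper name, the environment variable names, and the fallback values are assumptions:

```typescript
// Hypothetical helper for a Node backend: build the configuration
// object served by the /configuration endpoint from environment
// variables. Variable names are assumptions, not from the article.
export function buildConfiguration(env: Record<string, string | undefined>) {
  return {
    resourceServerA: env.RESOURCE_SERVER_A ?? '',
    resourceServerB: env.RESOURCE_SERVER_B ?? '',
    stage: env.STAGE ?? 'local',
  };
}

// With Express, the endpoint itself would then be a one-liner:
// app.get('/configuration', (_req, res) =>
//   res.json(buildConfiguration(process.env)));
```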

Let’s build ourselves a ConfigurationService which fetches the configurations.

The backend delivers us a configuration object with three properties: resourceServerA, resourceServerB and stage. We load it via a standard HTTP request. Nothing fancy.

We use the RxJS shareReplay operator to build caching behavior for the configuration. With this approach, we prevent another XHR request from being created when we call loadConfigurations again. Each new subscriber gets the cached configuration.

A complex environment can require dynamic configuration: values that may change at runtime, for example feature toggles. In such scenarios, the caching strategy used above needs to be extended.

Nice! This approach accesses the configurations from a backend. But what if we are not in control of the backend? Let’s say we are only responsible for the frontend and access some external backend services that we cannot influence.

In such cases, we would need to build ourselves a backend service that delivers our SPA and also provides the REST endpoint to read the configurations.

But, we want to keep our setup lightweight. We only want a simple web server that delivers our SPA.

Host configurations as assets — mount configuration files per environment

Instead of fetching the configuration over a REST endpoint, we directly fetch a JSON file with configurations that lies in our assets folder.

So let’s create a config folder inside assets and put a JSON with the local environment specific configurations in it.
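The exact contents are an assumption; mirroring the configuration object described earlier, an assets/config/configuration.json for local development could look like this:

```json
{
  "resourceServerA": "http://localhost:8081",
  "resourceServerB": "http://localhost:8082",
  "stage": "local"
}
```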

But how do we load the configuration.json? Almost in the same way as we did before. The only difference is that we do not fetch from the REST endpoint but from the assets folder.

Awesome! The ConfigurationAssetsLoaderService now loads our assets file which contains all configurations. But how do we change those configurations depending on the environment we are in?

We simply host a configuration per stage and mount the matching configuration.json file into the assets folder. When the pod starts, the stage’s configuration is mounted into the assets/config directory.

It is important to notice that we create a config folder inside the assets folder. We cannot use a flat hierarchy: when performing a mount, all existing files inside the mounted folder will be deleted.

The concrete way to mount volumes depends on your deployment tooling. We at Trasier use OpenShift for our deployments. OpenShift provides us with ConfigMaps, which can hold a single property or even an entire configuration file. On OpenShift, we have different stages; each stage hosts its specific configurations and mounts them into our assets/config folder on pod startup.

Ok. Great! We have now seen two approaches whose client-side implementation is very similar. We created a service that will fetch configurations either from a REST endpoint or from the assets folder.

So we have seen two ways to use a service to access configurations. But when do we call those services?

Well, short answer, it’s up to you. There are different times where it makes sense to call them. Each one comes with pros and cons.

When to fetch configurations?

Call it as soon as you need it

In this approach, we call the loadConfigurations method of the ConfigurationService as soon as we need it. For example, on a click that triggers a request to resourceServerA.

Notice that the first time we do so, the HTTP request to resourceServerA waits until the request to our /configuration endpoint finishes. All subsequent requests then work as usual, as they get the cached configuration.

Call it in our App component

Similar to the approach above, you can initially fetch the configurations inside the constructor of your AppComponent. This approach is especially useful when you display an initial screen that doesn’t require any server data.

Again, the configurations will be fetched. All subsequent subscribers then get the cached configurations.
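A minimal sketch of that eager fetch; the @Component decorator is omitted so the sketch runs standalone, and the loosely typed parameter stands in for the injected ConfigurationService:

```typescript
// In a real app this would be the @Component-decorated root component,
// and Angular's DI would construct it with the ConfigurationService.
export class AppComponent {
  constructor(configurationService: {
    loadConfigurations: () => { subscribe(): void };
  }) {
    // Kick off the fetch eagerly; later subscribers get the cached value.
    configurationService.loadConfigurations().subscribe();
  }
}
```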

Call it during app initialization

Angular allows us to call functions during app initialization. To do so, we take advantage of the APP_INITIALIZER token.

We provide the APP_INITIALIZER token in combination with a factory. The factory must return a function that returns a promise; Angular calls this function during app initialization and waits for the promise to resolve.

In our case, the factory returns a function that calls loadConfigurations, which fetches the configuration from the backend.
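A sketch of such a factory; the loosely typed parameter stands in for the ConfigurationService, and the provider registration is shown as a comment so the sketch stays framework-free:

```typescript
// Factory: returns the function Angular calls during app
// initialization; that function must yield a promise.
export function loadConfigurationsFactory(configurationService: {
  loadConfigurations: () => { toPromise(): Promise<unknown> };
}) {
  return () => configurationService.loadConfigurations().toPromise();
}

// In AppModule's providers array (Angular-specific, shown as a comment):
//
// {
//   provide: APP_INITIALIZER,
//   useFactory: loadConfigurationsFactory,
//   deps: [ConfigurationService],
//   multi: true,
// }
```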

This approach comes with one downside. Even though the initial request to fetch the configurations should be fast, it still blocks the startup of your application until the XHR request finishes.

So as you see, you have different ways to call the service. There’s still one more approach which doesn’t use a service at all.

Override configurations per environment

In this example, we use Angular’s environment files as they come: an environment.ts and an environment.prod.ts.

Even though we have more stages than just production and development, we only distinguish between those two. For local development, we use the environment.ts file. All the other stages are handled by environment.prod.ts.

But how?

Our environment.prod.ts does not contain the actual values; it contains placeholder values which will be overwritten per stage by a startup script.

An example environment.prod.ts file could look like this.

export const environment = {
  resourceServerA: 'REPLACED_BY_BUILD_RESOURCEA',
  resourceServerB: 'REPLACED_BY_BUILD_RESOURCEB',
  stage: 'REPLACED_BY_BUILD_STAGE',
};

When we start our web server, we can then use a custom start.sh script which will replace the placeholders.

We then execute this script at startup. For example, inside our Dockerfile.
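A hedged sketch of such a start.sh; the bundle directory, the environment variable names, and the nginx handoff are assumptions, not from the article:

```shell
#!/bin/sh
# Replace the build-time placeholders in every compiled bundle file
# with values from environment variables.
replace_placeholders() {
  dir="$1"
  for f in "$dir"/*.js; do
    [ -e "$f" ] || continue
    sed -i \
      -e "s|REPLACED_BY_BUILD_RESOURCEA|${RESOURCE_SERVER_A}|g" \
      -e "s|REPLACED_BY_BUILD_RESOURCEB|${RESOURCE_SERVER_B}|g" \
      -e "s|REPLACED_BY_BUILD_STAGE|${STAGE}|g" \
      "$f"
  done
}

# In the real start.sh we would patch the served bundle and then hand
# over to the web server, e.g.:
#   replace_placeholders /usr/share/nginx/html
#   exec nginx -g 'daemon off;'
```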

This approach, at least to me, feels kind of “hacky”. Overwriting strings in a bundle is probably not the most delightful way, and it harbors the risk of overwriting something that should not be overwritten.

If you still decide to use this approach, it is super important to choose good placeholders. Use special characters that you usually do not use in variable names.

Conclusion

Angular comes with environment files that allow us to handle environment-specific configurations. However, they do not meet the requirements of a continuous delivery setup.

Angular’s environment files are applied at build time. In continuous delivery, it is essential that we deploy the same artifact to different stages. Therefore, we need to pass in environment configurations at startup or at runtime.

Depending on our setup, we can load configurations via a service: either directly from a backend or from our assets folder.

When doing so it’s good practice to cache them.
