Marcin Baraniecki — Frontend Lion

Untangling nested dependencies with ' .yarnclean'

I recently stumbled upon a gripping case of two different versions of the same dependency (a types library) being used in the same project. It caused a TypeScript compiler build error and became a major showstopper.

With TypeScript, type definitions are installed under the node_modules/@types directory (e.g. node_modules/@types/react ). So far so good. After upgrading a few of them, this is what happened (I will be using exemplary package names):

- @types/package-A was installed in version 1.5.0
- @types/package-B was installed in version 1.5.0
- @types/package-B depends on @types/package-A , but for some (still mysterious) reason it ended up with its own, nested structure of node_modules/@types/package-A , with package-A in some old, outdated version (say 1.2.0), effectively “looking” at the wrong typings and breaking the TypeScript compilation.
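In other words, the resulting on-disk layout looked roughly like this (a sketch using the exemplary package names from above):

```
node_modules/
└── @types/
    ├── package-A/              # 1.5.0 (the correct version)
    └── package-B/              # 1.5.0
        └── node_modules/
            └── @types/
                └── package-A/  # 1.2.0 (stale copy picked up by the compiler)
```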

After some investigation, it turned out that none of the packages should come with its own, nested node_modules directory — instead, the structure of the project’s dependencies should always be “flat”. With the yarn package manager, this can be solved easily — with a .yarnclean file (located in the same directory as the project’s package.json file).

The contents of the .yarnclean file:
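The original snippet was embedded as a gist and didn’t survive here. As a minimal sketch, a .yarnclean for this case could contain a pattern targeting the stale nested copy (the pattern below is an assumption, not the author’s original file):

```
# assumption: strip the nested node_modules that shadows the correct typings
@types/package-B/node_modules
```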

Now it was only a matter of re-installing the dependencies with the yarn command. The nested node_modules under the @types directory disappeared, and so did all the unwanted, conflicting versions of the dependencies. The build process was back to normal!

Oh, and if you use .yarnclean , don’t forget to add it to your version control, just as you added the yarn.lock file!

Adam Smolarek — Senior Software Engineer

Is wget always the same?

When using AWS, you have to define a health check for a container. One easy way to do so for services with a REST API is to use wget, but there is a catch: there are two different wget implementations, depending on the image you use. On Alpine Linux (popular for container deployments) it is the wget from BusyBox, but on Debian it is the “standard” GNU wget, and there are differences between them.

By default, wget downloads the content of the page you request, but there is a way to prevent this: the --spider option, which, as the documentation says:

“When invoked with this option, Wget will behave as a Web spider.”

Sounds good, but there is a catch: it works differently on Alpine Linux than on a standard distribution. The first difference shows up when you call wget --version . On Alpine Linux ( docker run -it openjdk:8-jre-alpine /bin/sh ) you will get:

wget: unrecognized option: version

BusyBox v1.29.3 (2019-01-24 07:45:07 UTC) multi-call binary.

However, on Debian ( docker run -it openjdk:8-jre-stretch /bin/sh ), you will see:

GNU Wget 1.18 built on linux-gnu.

This indicates that we are dealing with two different pieces of software, and the differences do not end with the missing --version support on Alpine Linux.

There is also a difference in the way that --spider works.

The wget shipped with Debian is stricter: instead of sending a GET request to the server, --spider sends a HEAD request. That is fine when you are using, for example, Akka HTTP (description here), as there is no need for additional work on the server side, but with http4s you have to explicitly handle another method — HEAD.

On Alpine Linux, wget --spider sends a GET request, which is what you would expect.
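This distinction matters exactly at the point where the health check is defined. As a sketch, a Docker Compose healthcheck entry for such a service might look like the following (the /health endpoint and port 8080 are assumptions; remember that with an Alpine image this probe arrives as a GET, while with GNU wget it would be a HEAD):

```yaml
healthcheck:
  test: ["CMD", "wget", "--spider", "-q", "http://localhost:8080/health"]
  interval: 30s
  timeout: 5s
```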

I learned this while debugging a service that was failing its health check in one container, but not in the other.

As usual, it was not a bug in the kernel ;)

Maciek Opała — Senior Software Engineer

Encrypting a volume attached to a running AWS instance

Recently, to improve the security of our CI/CD pipeline, I needed to encrypt the volume attached to the AWS instance on which our CI/CD server was deployed and running. Doing it wasn’t straightforward, since it’s impossible to encrypt a volume of a running instance. Below you can find the steps that should be taken to encrypt an AWS volume tied to a working instance. Remember that all the AWS components (encryption key, volume and instance) must be defined in the same AWS region.

Generating the encryption key

First of all, you need an encryption key that will be used to encrypt the volume.

1. Log in to your AWS account.

2. Go to IAM, Encryption keys, select the appropriate region where you’d like to generate the key, then create a key.

3. You need to provide:

- Alias Name (required)
- Tag (optional)
- Key Administrators — IAM users and roles that are allowed to administer the key with the KMS API (what’s important here: you can mark the key as not being enabled to be deleted)
- Key Usage Permissions — IAM users and roles that can use this key to encrypt and decrypt data from within applications and when using AWS services integrated with KMS

4. Verify the key details and finally create the key.
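The console steps above can also be scripted. A hedged sketch with the AWS CLI (the description, alias name and region below are assumptions; key administrators and usage permissions would be set via a key policy, which is omitted here):

```
# Create a KMS key in the target region and capture its id
KEY_ID=$(aws kms create-key \
  --description "CI/CD volume encryption key" \
  --region eu-west-1 \
  --query KeyMetadata.KeyId --output text)

# Attach a human-readable alias to the key
aws kms create-alias \
  --alias-name alias/cicd-volume-key \
  --target-key-id "$KEY_ID" \
  --region eu-west-1
```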

After creating an encryption key, you’re ready to encrypt the volume.

Volume encryption