If you want to install TensorFlow with GPU support, you currently have two choices: either do your own manual install (good luck with that) or use a Docker image. If you use the TensorFlow GPU Docker image, it is almost plug and play: you can start coding almost immediately in a Jupyter notebook with all the TensorFlow GPU libraries installed. But what if you want to install your own libraries on top of it? And how do you access your own files?
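As a reference point, here is a minimal sketch of launching the official GPU image (this assumes Docker 19.03+ with the NVIDIA Container Toolkit installed; the image tag shown is the current convention and may differ for your setup):

```shell
# Pull the official TensorFlow image with GPU support and Jupyter included
docker pull tensorflow/tensorflow:latest-gpu-jupyter

# Run it, exposing Jupyter's default port.
# --gpus all gives the container access to every GPU on the host.
docker run --gpus all -it -p 8888:8888 tensorflow/tensorflow:latest-gpu-jupyter
```

After this, Jupyter is reachable at localhost:8888, but you are still confined to the libraries baked into the image.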

In this post I will explain exactly that. I assume the reader already has Docker installed and knows the very basics of Docker (for example, Part I and Part II of the Docker Get Started guide).

Docker has several advantages: it is portable, it allows for version control and reproducibility, and it is very efficient (more so than a virtual machine). However, it requires some setup that is not straightforward.

Once you have installed the Docker image for GPU, following this intro, you will want to run the image in bash mode rather than launching Jupyter, with access to your own files and to the relevant ports. Inside the running container, you will want to install your own libraries and be able to recover that work later, since Docker containers are ephemeral and isolated. I will explain how to do all of that.
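The whole workflow can be sketched roughly as follows (the mount path, ports, image tag, and committed image name are all illustrative assumptions; substitute your own):

```shell
# Start the GPU image with a bash shell instead of launching Jupyter.
# -v mounts a host directory into the container at /workspace;
# -p maps the Jupyter (8888) and TensorBoard (6006) ports.
docker run --gpus all -it \
    -v "$HOME/projects:/workspace" \
    -p 8888:8888 -p 6006:6006 \
    tensorflow/tensorflow:latest-gpu-jupyter bash

# Inside the container, install whatever extra libraries you need:
pip install pandas scikit-learn

# From another host terminal, save the modified container as a new
# image so your installed libraries survive after the container exits:
docker ps                                # note the container ID
docker commit <container-id> my-tensorflow:with-extras
```

Next time, run `my-tensorflow:with-extras` instead of the stock image and your additions are already in place. The rest of the post walks through each of these steps in detail.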