Project Harbor is another VMware initiative in the Cloud Native Apps space. In a nutshell, it allows you to store and distribute Docker images locally, from within your own infrastructure. As well as providing security, identity and management of images, Project Harbor also offers better performance for image transfers by keeping the registry close to the build and run environments. Harbor also supports multiple deployments, so that you can have images replicated between them for high availability. You can get more information (including the necessary components) about Project Harbor on GitHub.

In this post, we will deploy Project Harbor on Photon OS, and then create some docker volumes on Virtual SAN using the docker volume driver for vSphere. This provides an additional layer of availability for your registry and images: if one of the physical hosts in your infrastructure hosting Project Harbor fails, there is still a full copy of the data available. Special thanks to Haining Henry Zhang of our Cloud Apps team for helping me understand this process.

I’m not going to explain how to get started with Project Harbor – my colleague Ryan Kelly has already done a really good job of that in his blog post here. But don’t follow those steps just yet, as we first have to make some changes to the configuration for these VSAN volumes.

One thing I will point out, however, is that the “docker-compose up -d” command initially failed on my setup with a freshly deployed Photon OS 1.0 GA (full) deployment:

ERROR: for proxy  driver failed programming external connectivity \
on endpoint deploy_proxy_1 \
(0d440744b58f701bfe85657bd17a8bbe3fe455574a21494dfee793ea5d79b17e): \
Error starting userland proxy: listen tcp 0.0.0.0:80: listen: \
address already in use
Traceback (most recent call last):
  File "<string>", line 3, in <module>
  File "compose/cli/main.py", line 63, in main
AttributeError: 'ProjectError' object has no attribute 'msg'
docker-compose returned -1

This was due to the httpd service already running on the full deployment of Photon OS. After stopping it with “service httpd stop”, I was able to rerun the “docker-compose up -d” command and the deployment succeeded.
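For completeness, the workaround boils down to the following (disabling the service is my own optional extra, to stop it coming back after a reboot):

```shell
# port 80 is held by the httpd service shipped in the full Photon OS profile
systemctl stop httpd        # "service httpd stop" also works
systemctl disable httpd     # optional: prevent it starting again on reboot
# then retry the Harbor deployment from the Deploy folder
docker-compose up -d
```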

Using persistent volumes on VSAN for Project Harbor storage

By default, Harbor stores images on a local filesystem in the VM/appliance where it is launched. While you can use other storage back-ends instead of the local filesystem (e.g. S3, OpenStack Swift, Ceph, etc.), in this post we want to use docker volumes that reside on the VSAN datastore. We can use the docker volume driver for vSphere to do just that. Once the driver (vmdk) has been installed on the ESXi hosts and in the Photon OS VM where we plan to run Project Harbor (see previous link), we need to do the following:

Step 1: Create 3 volumes on VSAN

There are 3 volumes needed for Project Harbor. The first is for the registry, the second for the mysql database, and the third for the job service. As you can see below, no policies are set when we create our docker volumes on VSAN, which means the default of FTT=1 (Failures To Tolerate = 1) is used for the volumes (a replica copy of the data is created on the VSAN cluster). If you want to use a different policy, you can append the option “-o vsan-policy=” to the command line.

root@harbor-photon [ /workspace/harbor/Deploy ]# docker volume create \
    --driver=vmdk --name registry-vsan -o size=20gb
registry-vsan
root@harbor-photon [ /workspace/harbor/Deploy ]# docker volume create \
    --driver=vmdk --name mysql-vsan -o size=20gb
mysql-vsan
root@harbor-photon [ /workspace/harbor/Deploy ]# docker volume create \
    --driver=vmdk --name jobservice-vsan -o size=20gb
jobservice-vsan
root@harbor-photon [ /workspace/harbor/Deploy ]# docker volume ls
DRIVER              VOLUME NAME
vmdk                jobservice-vsan
vmdk                mysql-vsan
vmdk                registry-vsan

By the way, I just chose a few small sizes for this demo. If you plan to have lots of images, you may need to consider a much larger size (TBs) for the registry volume. 20GB should be more than enough for the MySQL volume, as it should not grow beyond that. The jobservice volume is used for replication logging, so we estimate that a few hundred GB should be enough there.
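As a sketch of the policy option mentioned earlier, a volume created with an explicit policy would look like this – “gold” is a hypothetical policy name, which must already have been defined for the VSAN cluster:

```shell
# illustrative only: "gold" is a hypothetical pre-existing VSAN policy
docker volume create --driver=vmdk --name registry-vsan \
    -o size=1tb -o vsan-policy=gold
```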

Step 2: Update the docker-compose.yml

This file is used by docker-compose to set up the Project Harbor deployment. It is in the Deploy folder of Project Harbor. We need to change the volume references for the three services mentioned previously. Below you can see the default entries, and then what you need to change them to. We are simply switching the volumes from the local filesystem (/data) to the new volumes just created above:

1. registry

from:

    volumes:
      - /data/registry:/storage
      - ./config/registry/:/etc/registry/

to:

    volumes:
      - registry-vsan:/storage
      - ./config/registry/:/etc/registry/

2. mysql

from:

    volumes:
      - /data/database:/var/lib/mysql

to:

    volumes:
      - mysql-vsan:/var/lib/mysql

3. jobservice

from:

    volumes:
      - /data/job_logs:/var/log/jobs
      - ./config/jobservice/app.conf:/etc/jobservice/app.conf

to:

    volumes:
      - jobservice-vsan:/var/log/jobs
      - ./config/jobservice/app.conf:/etc/jobservice/app.conf

There is one more change to be made to the docker-compose.yml file. We need to include a new section at the end of the .yml file to tell docker-compose about our new volumes, as follows:

volumes:
  registry-vsan:
    external: true
  mysql-vsan:
    external: true
  jobservice-vsan:
    external: true

It is important to ensure that the “volumes:” line starts at the beginning of the line in the .yml config file. The name of the volume on the next line is then indented two spaces from the start of the line, and the “external:” directive on the following line four spaces from the start of the line. Otherwise the docker-compose command will complain about the formatting.
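A quick way to catch indentation mistakes before deploying is to let docker-compose parse the file itself – it prints the resolved configuration on success, or a parse error pointing at the offending line otherwise:

```shell
# run from the Deploy folder; validates and prints the resolved config
docker-compose config
```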

Step 3: docker-compose build and docker-compose up

Now we rebuild the Project Harbor environment with these new volumes in place, and when that succeeds, we can bring Project Harbor up. I won’t paste the output of the build here as it is rather long.

root@harbor-photon [ /workspace/harbor/Deploy ]# docker-compose build
.
.
.
root@harbor-photon [ /workspace/harbor/Deploy ]# docker-compose up -d
Creating network "deploy_default" with the default driver
Creating deploy_log_1
Creating deploy_ui_1
Creating deploy_registry_1
Creating deploy_mysql_1
Creating deploy_jobservice_1
Creating deploy_proxy_1
root@harbor-photon [ /workspace/harbor/Deploy ]#

Step 4: Verify volumes are in use

To verify that the volumes are indeed being used by the Project Harbor services, we can use “docker inspect” to look at some of the containers running Project Harbor. In this case, I am looking at the registry container:

root@harbor-photon [ /workspace/harbor/Deploy ]# docker ps
CONTAINER ID  IMAGE                   COMMAND                 CREATED         STATUS  PORTS  NAMES
33b9f0342ab4  library/nginx:1.9       "nginx -g 'daemon off"  23 minutes ago  Up ...
9a857225576d  deploy_jobservice       "/go/bin/harbor_jobse"  23 minutes ago  Up ...
d7555469a4e9  deploy_mysql            "docker-entrypoint.sh"  23 minutes ago  Up ...
a315d0bffbf6  library/registry:2.4.0  "/bin/registry serve "  23 minutes ago  Up ...
0ad826b8a11d  deploy_ui               "/go/bin/harbor_ui"     23 minutes ago  Up ...
d66d1cf81172  deploy_log              "/bin/sh -c 'cron && "  23 minutes ago  Up ...
root@harbor-photon [ /workspace/harbor/Deploy ]# docker inspect \
    a315d0bffbf6 | grep -A 10 Mounts
        "Mounts": [
            {
                "Name": "registry-vsan",
                "Source": "/mnt/vmdk/registry-vsan",
                "Destination": "/storage",
                "Driver": "vmdk",
                "Mode": "rw",
                "RW": true,
                "Propagation": "rprivate"
            },
            {

In the above output, you can see that the source is our “registry-vsan” volume, and that the driver is “vmdk”, the docker volume driver for vSphere. Looks good.

Now let’s take a look at the volumes that are currently attached to the Photon OS appliance where we are running Project Harbor. We should be able to see the original appliance volumes, and there should now be 3 additional VSAN volumes used by Project Harbor. We can also verify the policy associated with them, which should be FTT=1 (RAID-1 mirror). This is another sure sign that the containers running in this VM are using the volumes.
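One way to check this from the host side (a sketch, assuming the admin CLI that ships with the driver VIB is at its default location on the ESXi host) is the vmdkops admin tool, which lists each docker volume along with its datastore, policy and the VM it is attached to:

```shell
# run on one of the ESXi hosts in the VSAN cluster
/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py ls
```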