We then deploy to production.

Wait … what about the Testing Environment™? What about the Integration Environment™? What about Jenkins? Well, Go’s tests run in sub-second time, so we just run gofmt, go test, and go test -cover automatically every time we save a file. More importantly, though, a micro-service approach means our changes are isolated from the rest of the system. Our code’s effects on the total system are well understood. Our code can be deployed independently. Static compilation means we are confident the program has everything it needs. Wrapping our service in a Docker image means the environment is completely controlled. The whole process is so free of mystery that it may leave some people a bit bored.
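The on-save sequence can be sketched as a small script. The real hook would invoke the Go toolchain directly from an editor save hook or file watcher; here each command is routed through run(), which only prints it, so the sketch executes anywhere.

```shell
#!/bin/sh
# Sketch of the on-save check sequence. run() prints the command instead
# of executing it; drop the stub to run the real Go toolchain.
run() { echo "+ $*"; }

run gofmt -l .            # list files that are not gofmt-clean
run go test ./...         # run the package tests
run go test -cover ./...  # run them again with a coverage summary
```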

The actual deploy to our production cluster takes the most time, but only because we canary-deploy over a few minutes to give ourselves the ability to roll back if needed. We use Fleet on CoreOS, so deploying a service to one of our cluster machines is as simple as restarting the service.

fleetctl stop ic-api-gtin@1.service && fleetctl start ic-api-gtin@1.service

When starting, our service automatically pulls down the latest Docker image and starts it up.

ic-api-gtin@.service

[Unit]
Description=ic-api-gtin daemon

[Service]
Type=simple
Restart=always
RestartSec=30
PermissionsStartOnly=true
ReadOnlyDirectories=/etc
EnvironmentFile=/etc/environment
Environment="RELEASE=latest"
TimeoutStartSec=5m
ExecStartPre=-/usr/bin/docker kill ic-api-gtin-%i
ExecStartPre=-/usr/bin/docker rm ic-api-gtin-%i
ExecStartPre=/usr/bin/docker pull nordstrom/ic-api-gtin:${RELEASE}
ExecStart=/usr/bin/docker run \
  --name ic-api-gtin-%i \
  --hostname ic-api-gtin-%i.cluster.local \
  --publish 19111:80 \
  nordstrom/ic-api-gtin:${RELEASE}

Since we started this service on one of five nodes behind a round-robin load balancer, a fraction (20%) of our users will start receiving responses from our new service immediately. We can also send a request to a specific node to make sure it looks good. If anything looks bad either in sniff-testing, on the metrics dashboard, or in the logs, we can easily drop the node out of rotation and make any adjustments.
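Sniff-testing a specific node and dropping it out of rotation might look like the sketch below. The hostname and root path are illustrative (only port 19111 comes from the unit file above), and curl and fleetctl are stubbed with echo so the sketch runs without cluster access.

```shell
#!/bin/sh
# Sketch: hit the canary node directly, bypassing the round-robin load
# balancer; if it fails, stop its unit to drop it out of rotation.
# curl and fleetctl are stubbed so the sketch runs anywhere.
curl() { echo "GET $2"; }
fleetctl() { echo "fleetctl $*"; }

if curl -fsS "http://ic-api-gtin-1.cluster.local:19111/"; then
  echo "canary looks good"
else
  # drop the bad node out of rotation while we investigate
  fleetctl stop "ic-api-gtin@1.service"
fi
```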

In this case, while watching our canary, we decided it would be a good time to introduce a related feature. We spent a couple more minutes repeating the steps above for the added feature; we had the time.

Everything looks good? We just roll through our other load-balanced nodes.
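Rolling through the remaining nodes is just the restart from earlier, repeated per instance. This sketch assumes unit instances ic-api-gtin@2 through @5 (the canary was @1 of five nodes); fleetctl is stubbed with echo so it runs anywhere, and you would drop the stub to run it against a real Fleet cluster.

```shell
#!/bin/sh
# Sketch: roll the update through the remaining load-balanced nodes.
# fleetctl is stubbed so the sketch runs without a Fleet cluster.
fleetctl() { echo "fleetctl $*"; }

for i in 2 3 4 5; do
  fleetctl stop  "ic-api-gtin@${i}.service"
  fleetctl start "ic-api-gtin@${i}.service"
done
```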