Continuous Delivery of a Microservice Architecture using Concourse.ci, Cloud Foundry and Artifactory

Oliver Wolf, Currently working on MonsterWriter - The most enjoyable way to write a thesis, paper, article or blog post.


Introduction

This tutorial takes a simple microservice architecture and explains how to set up a concourse pipeline to test and deploy single microservices independently without affecting the overall microservice system. Cloud Foundry is used as the platform to which the microservices are deployed.

Along the way all basic concourse.ci concepts are explained.

We'll use one git repository for each microservice!!!

The goal of the concourse pipeline, which is built during this tutorial, is to automatically trigger and execute the following steps whenever a developer pushes a change to a git repository:

1. The pipeline pulls the code base of the changed microservice.
2. It runs the unit tests.
3. If the app/microservice is written in a programming language that requires compilation, the pipeline compiles the source code.
4. Next, the pipeline deploys the change automatically to the test environment. Before the pipeline deployed the change, both existing environments (test and production) consisted of the same microservice versions, so we can be confident that the deploy will also work in the production environment.
5. It runs smoke tests (a kind of integration test) to check that the change doesn't break other microservices.
6. Once the smoke tests in the test environment have succeeded (or failed), the pipeline sends out a message via email (Slack can also be used instead of email).
7. The deployment to the production system must be triggered manually via the concourse.ci web interface but is then performed automatically.

The steps above describe a common pattern for building a continuous delivery pipeline, but you should discuss within your team which steps are required for each specific project. For example, there could be more than one test environment a change must pass before it's deployed to production.
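To make this concrete, the flow above can be sketched as a concourse pipeline skeleton. All job and resource names here are illustrative placeholders, not the exact ones built later in this tutorial:

```yaml
# Sketch only -- names are placeholders; the real resource definitions come later.
jobs:
- name: test-and-deploy-to-test
  plan:
  - get: microservice-repo        # the microservice's git repository
    trigger: true                 # run automatically on every push
  - task: unit-tests
    file: microservice-repo/ci/tasks/unit-tests.yml
  - put: cf-test                  # deploy to the test environment
- name: deploy-to-production
  plan:
  - get: microservice-repo
    passed: [test-and-deploy-to-test]   # only versions that passed the test job
    trigger: false                # production deploys are triggered manually
  - put: cf-production
```

The `passed:` constraint is what chains the jobs into a pipeline, and `trigger: false` keeps the production deploy a manual decision.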

I've prepared a simple microservice architecture that can be used during this tutorial. It's an architecture consisting of two services: a customer service written in Java and an order service written in Node.js. Links to the repositories are provided later.

Note that I'm using "concourse" and "concourse.ci" as synonyms.

Prerequisites

To follow this tutorial a few things are required:

A GitHub account, because you have to fork the repositories in order to push changes.

A Cloud Foundry account with the Space Developer role in two application spaces. One space will be used to deploy the testing environment and the other one for the production environment. You can use public Cloud Foundry targets like anynines or run.pivotal.io. Alternatively you can deploy your own Cloud Foundry, but this requires some more things to be in place, so I recommend using a public provider. Some of them have free/trial plans.

The ability to create a PostgreSQL service instance in each Cloud Foundry space. MySQL should also work but has not been tested. If this is not possible, there is also an alternative described that stores data on the ephemeral disk of the application containers. This solution can be used to follow the tutorial but it's not usable in real life.

A concourse.ci server. For testing purposes, you can set up a concourse.ci server on your local machine, but it's not intended as a durable solution because the pipeline doesn't deploy anything while your laptop is shut down. The installation of concourse on your local machine requires Vagrant and VirtualBox and is explained in the next step of this tutorial. As an alternative to setting up concourse with Vagrant, you can also use Docker Compose to deploy a concourse server.

A JFrog Artifactory server. For testing purposes we set up an Artifactory server on the local machine in the relevant section of this tutorial.

A free Docker Hub account.

A Docker engine running locally to build and upload your own Docker images.

Setup a concourse.ci Server with Vagrant

To get started fast you'll set up a concourse server on your local machine.

In this step we are using vagrant to get a concourse.ci server up and running. This means you have to install vagrant and virtualbox before you can continue.

Once you've installed Vagrant, change into an empty directory on your disk (the following commands will create some files there) and execute:

$ cd [an empty directory]

$ vagrant box add concourse/lite --box-version 2.5.0

$ vagrant init concourse/lite

$ vagrant up



If vagrant up doesn't succeed and repeatedly prints the following warning message:

default: Warning: Connection timeout. Retrying...

Open the Vagrantfile - which has been created by the vagrant init command - and insert the following lines (lines 10-13):

Insert lines 10 - 13 into the Vagrantfile when you see the error above.

1  # -*- mode: ruby -*-
2  # vi: set ft=ruby :
3
4  # All Vagrant configuration is done below. The "2" in Vagrant.configure
5  # configures the configuration version (we support older styles for
6  # backwards compatibility). Please don't change it unless you know what
7  # you're doing.
8  Vagrant.configure(2) do |config|
9
10   config.vm.provider :virtualbox do |v|
11     v.customize ["modifyvm", :id, "--cableconnected1", "on"]
12   end
13
14   # The most common configuration options are documented and commented below.

After changing the Vagrantfile, run:

$ vagrant halt

$ vagrant up

Another issue you could stumble upon is the following one:

By default, vagrant up checks whether there is a new version of the concourse/lite box. If there is and the newly available version has a suffix like "-rc.33", you'll see this error.

$ vagrant up

Bringing machine 'default' up with 'virtualbox' provider...

==> default: Importing base box 'concourse/lite'...

==> default: Matching MAC address for NAT networking...

==> default: Checking if box 'concourse/lite' is up to date...

A version of the box you're loading is formatted in a way that

Vagrant cannot parse: '2.5.1-rc.33'. Please reformat the version

to be properly formatted. It should be in the format of

X.Y.Z.



In this case open the Vagrantfile and ensure the following line is present and not commented out:

config.vm.box_check_update = false

When everything works, vagrant up should print the following output:

The output of vagrant up when everything works.

$ vagrant up

Bringing machine 'default' up with 'virtualbox' provider...

==> default: Setting the name of the VM: example-concourse_default_1482415428978_24700

==> default: Clearing any previously set network interfaces...

==> default: Preparing network interfaces based on configuration...

default: Adapter 1: nat

default: Adapter 2: hostonly

==> default: Forwarding ports...

default: 22 => 2222 (adapter 1)

==> default: Running 'pre-boot' VM customizations...

==> default: Booting VM...

==> default: Waiting for machine to boot. This may take a few minutes...

default: SSH address: 127.0.0.1:2222

default: SSH username: vagrant

default: SSH auth method: private key

default: Warning: Connection timeout. Retrying...

default:

default: Vagrant insecure key detected. Vagrant will automatically replace

default: this with a newly generated keypair for better security.

default:

default: Inserting generated public key within guest...

default: Removing insecure key from the guest if it's present...

default: Key inserted! Disconnecting and reconnecting using new SSH key...

==> default: Machine booted and ready!

==> default: Checking for guest additions in VM...

default: The guest additions on this VM do not match the installed version of

default: VirtualBox! In most cases this is fine, but in rare cases it can

default: prevent things such as shared folders from working properly. If you see

default: shared folder errors, please make sure the guest additions within the

default: virtual machine match the version of VirtualBox you have installed on

default: your host and reload your VM.

default:

default: Guest Additions Version: 5.1.8

default: VirtualBox Version: 5.0

==> default: Setting hostname...

==> default: Configuring and enabling network interfaces...

==> default: Mounting shared folders...

default: /vagrant => /private/tmp/example-concourse



You now should be able to open http://192.168.100.4:8080/ in your browser.

Authentication is disabled in the local setup. It's not required to provide a username and password. You should be immediately logged in and see the following screen:

Another Hint: Increase Memory settings for the Vagrant VM

On my MacBook I had the issue that everything got really slow when testing and building the Java application (see section: "Create concourse job to build the UAA and run the unit tests"). For this reason, I increased the memory of the Vagrant VM to 4GB. You can do this by adding the following line to the Vagrantfile:

Insert line 12 to use a bigger Vagrant VM

1  # -*- mode: ruby -*-
2  # vi: set ft=ruby :
3
4  # All Vagrant configuration is done below. The "2" in Vagrant.configure
5  # configures the configuration version (we support older styles for
6  # backwards compatibility). Please don't change it unless you know what
7  # you're doing.
8  Vagrant.configure(2) do |config|
9
10   config.vm.provider :virtualbox do |v|
11     v.customize ["modifyvm", :id, "--cableconnected1", "on"]
12     v.memory = 4096
13   end
14
15   # The most common configuration options are documented and commented below.

After changing the Vagrantfile, run again:

$ vagrant halt

$ vagrant up

Install the fly CLI and login using the fly CLI

The concourse web interface is only used for displaying the state of the pipelines and for triggering builds manually. All other tasks are performed via the fly CLI; for example, creating, updating and deleting pipelines is done via the CLI.

You can download the fly CLI as precompiled binary from github.

To download and install the CLI on macOS, for example, run the following commands:

Installing the fly CLI on Mac OSX

$ curl -L https://github.com/concourse/fly/releases/download/v2.5.1-rc.9/fly_darwin_amd64 -o fly

$ chmod +x fly

$ sudo mv fly /usr/local/bin/fly

Mind that the download link of the CLI differs for each operating system.

In order to communicate with the concourse.ci server via CLI it's required to let the CLI know where the concourse server runs:

$ fly --target local login --concourse-url http://192.168.100.4:8080

The concourse server and the fly CLI should have the same version. You'll see the following warning after executing fly login if the concourse server version and fly version differ:

WARNING:



fly version (2.5.1) is out of sync with the target (2.5.0). to sync up, run the following:



fly -t local sync



To get rid of this warning, just execute fly -t local sync and the fly CLI will upgrade/downgrade itself to match the concourse server version.

Learn the basic concourse.ci concepts and setup a hello world pipeline

To verify that our concourse setup is working correctly, let's create a simple pipeline. But first let's have a look at what "pipeline as code" means.

Pipeline as Code

Concourse realizes the concept of "pipeline as code"

Concourse.ci is built around the concept of "pipeline as code". This means we don't click through the web interface to create and configure pipelines; instead, they are described in a YAML file (or, more generally, as code).

Once a pipeline is described you can upload the yaml file to the concourse server.

The pipeline as code concept has the following benefits:

You can use a source code management system like git to manage pipelines. This makes collaboration easy, and the versioning allows you to trace changes.

Thanks to the source code management system, it's possible to roll back changes to the pipelines very easily using tools you already know, git for example.

It's possible to setup the same pipeline on different concourse.ci servers by just changing the target of the fly CLI.

In concourse.ci it's possible to extract so-called "tasks" into separate YAML files which can then be reused within different pipelines.
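For example, a task extracted into its own file might look like this (the file name and its content are illustrative):

```yaml
# ci/tasks/run-tests.yml -- a standalone, reusable task definition
platform: linux
image_resource:
  type: docker-image
  source: { repository: ubuntu }
run:
  path: sh
  args: ["-exc", "echo running tests"]
```

A pipeline then references it via the file keyword, with the path prefixed by the name of the resource that contains the file:

```yaml
- task: run-tests
  file: app_sources/ci/tasks/run-tests.yml
```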

Where to put the pipeline code?

Best Practice!!! Pipeline code should be stored in the same repository as the application that is deployed by the pipeline.

The best practice is to put the YAML files containing the pipeline specifications into the same code repository as the source code of the application/microservice.

Since we use one code repository for each microservice the pipeline will also be distributed across different code repositories. In fact - in this tutorial - we'll have multiple concourse pipelines with some shared artefacts.

Each microservice code repository will contain a "ci" directory within the root directory. Inside the ci directory we'll have another directory called "pipelines" where the yaml files are stored which describe the pipelines.

In the end it doesn't matter to concourse where the YAML files are stored, but the structure described above is often used. Since concourse itself uses concourse for continuous integration, you can have a look at the concourse GitHub project for a reference.

First simple pipeline without deploying anything

A concourse pipeline is not required to deploy something. Abstractly speaking, concourse.ci provides mechanisms to observe resources and to orchestrate the execution of bash scripts. Often the execution of the scripts is triggered when concourse observes a change of a specified resource.

Later we'll define GitHub repositories as resources but for the hello world example we don't use a resource at all.

Hint: Don't skip this step because it's a good proof that the concourse installation works correctly. That's what Hello Worlds are good for.

Let's create a very simple pipeline that executes a bash script. The bash script prints hello world to stdout, which we will then see in the browser interface of concourse.

In your terminal: change into an empty working directory and create the "ci" directory within this working directory. Furthermore create the file "hello-world.yml" within the ci directory.

.yml is the right file extension for YAML files.

concourse-hello-world $ mkdir ci

concourse-hello-world $ touch ci/hello-world.yml

Open the ci/hello-world.yml in the editor of your choice and insert the following content:

A very simple hello world concourse pipeline specified in YAML.

Mind that indentation matters in YAML.

jobs:
- name: hello_world_job
  plan:
  - task: hello_world_task
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: { repository: ubuntu }
      run:
        path: echo
        args:
        - "Hello World"



If you're not familiar with YAML, the pipeline specification might be a bit hard to read at first glance. YAML is a superset of JSON with some syntactical extensions. In other words: every valid JSON is a valid YAML but not the other way around.

Probably you are familiar with JSON. To give you a chance to get an idea of what the dashes and whitespace mean in the YAML specification, the same pipeline is specified in JSON below:

Bad Practice!!! Although it is possible to upload JSON to concourse, I don't recommend specifying concourse pipelines in JSON. Pipeline specifications tend to be long and YAML is much more readable/maintainable for this purpose.

{
  "jobs": [
    {
      "name": "hello_world_job",
      "plan": [
        {
          "task": "hello_world_task",
          "config": {
            "platform": "linux",
            "image_resource": {
              "type": "docker-image",
              "source": { "repository": "ubuntu" }
            },
            "run": {
              "path": "echo",
              "args": ["Hello World"]
            }
          }
        }
      ]
    }
  ]
}



Before diving into theory and exploring what the statements in the pipeline definition mean, let's first get it running. To do so it's required to upload the pipeline specification to the concourse server. For this purpose, execute the following command:

The arguments of this command are explained in the section "The Hello World pipeline explained"

concourse-hello-world $ fly -t local set-pipeline --pipeline hello-world --config ci/hello-world.yml



If you want to update the pipeline definition on the concourse server, you edit the ci/hello-world.yml file and execute the same command again.

Every time you change something on the pipeline specification the set-pipeline subcommand will print all differences between the new pipeline specification and the pipeline specification which is currently on the concourse server.

In this example it's the initial upload, so the command will print the whole pipeline specification as "added" content. Confirm the changes by typing "y" and pressing enter.

Every time a pipeline specification is uploaded with the set-pipeline subcommand, the changes must be confirmed with "y".

jobs:
  job hello_world_job has been added:
    name: hello_world_job
    plan:
    - task: hello_world_task
      config:
        platform: linux
        image_resource:
          type: docker-image
          source:
            repository: ubuntu
        run:
          path: echo
          args:
          - Hello World
          dir: ""

apply configuration? [yN]: y

Once the set-pipeline subcommand is executed you see the pipeline in the web interface. In case you are using the local vagrant setup of concourse open http://192.168.100.4:8080/teams/main/pipelines/hello-world. You should see the following screen:

So far the pipeline will not be executed by concourse because it's paused. To unpause the pipeline, click the leftmost icon in the blue bar. A side navigation appears which lists all pipelines uploaded to this concourse server. Next to the name of the "hello-world" pipeline there is a play symbol you have to click in order to unpause the pipeline.

The term "build" is also explained in the next section "Concourse.ci core concepts"

Once the pipeline is unpaused you can start a so-called "build" which will then execute the pipeline. The following animation shows how to unpause the pipeline and how to start a build afterwards.

In the background concourse will download a Docker image, so it could take a few minutes until the pipeline execution is finished. As long as a pipeline is running it's displayed in yellow. When the last pipeline execution has successfully finished it's displayed in green.

Once the pipeline is finished you can click on "hello_world_task" to see the stdout of the hello world task in the pipeline. It should look like this:

Congratulations!!! You've just configured your first concourse.ci pipeline.

Pro Tips

As an alternative to unpausing the pipeline and starting a build via the web interface, you can also use the fly CLI:

Unpause and start the pipeline using the fly CLI instead of the web interface

concourse-hello-world $ fly -t local unpause-pipeline --pipeline hello-world

concourse-hello-world $ fly -t local trigger-job --job hello-world/hello_world_job --watch

The next section explains the core concepts of concourse.ci and the section after that explains the hello world pipeline in detail.

Concourse.ci Concepts

After the celebration of the successfully configured hello world pipeline, this section explains what we actually did.

The concourse.ci domain model illustrated in UML.

First let's have a look at the basic concourse.ci concepts used in the last section.

Note that the illustration is intended to show concourse.ci concepts, not the concourse.ci architecture. Furthermore, it is simplified; in reality it is more abstract.

The above image illustrates the concourse.ci domain model and shows how the concepts relate to each other. All concepts which are relevant for the hello world pipeline are highlighted in blue.

Pipelines

Pipelines are the central concept in concourse.ci. Because continuously integrating, delivering and deploying software is the primary use case of concourse.ci, we can think of a pipeline as a description of how software changes flow into the staging/production system. In other words, a pipeline describes the stages (quality gates) a change must pass before it gets released. Although there might be other concourse.ci use cases, this definition of a pipeline is a good mental model for building the first pipeline. For this tutorial the stages are:

Running unit tests
Deploy to the test environment
Execute smoke tests
Deploy to the production environment

These four stages will be realized as concourse.ci jobs (see below).

Resources

We did not use resources yet in the hello world example, but they are crucial to explain what pipelines are. Resources are intended to flow through pipelines whenever they change. A resource can be defined as an input of one or multiple jobs. In concourse.ci there are predefined resource types. Later in this tutorial we'll define two git resources, one for each microservice. Each of these two git resources will point to a git repository (hosted on GitHub in our example). Concourse knows how to observe a git repository in order to detect whether it changed. This means concourse will poll the git repository periodically and check whether there are new commits.

Once concourse detects a change on a resource, it will pull the content of the resource and pass it to the jobs specified in the pipeline definition. For the git resource type, for example, concourse will clone the git repository and then pass all cloned files/directories to the jobs declared to receive the resource. Besides observing a resource and pulling its content, concourse also provides the possibility to update a resource. This means that we can create new versions of a resource during the delivery process. An example where we'll need to create a new version of a resource is that our pipeline builds an executable jar file out of the Java sources in the git repository. The jar file is represented as a concourse resource which is updated by the concourse pipeline. The resource type that is used to manage jar files is the Artifactory resource type.

The list of supported resource types can be extended, but for this tutorial we are fine with the existing ones. The logic for how to find out whether a resource changed, and how to pull and push a specific resource type, is encapsulated in a Docker image. Hence anyone can provide a new concourse resource type by simply putting three executable scripts into a Docker image. See concourse.ci/implementing-resources.html for a reference or git-resource for an example.

Jobs

Jobs describe the actual work a pipeline does. A job consists of a build plan. A build plan itself consists of multiple steps. The steps in a build plan can be arranged to run in parallel or one after the other (a combination of parallel and sequential is also possible). There are three different types of steps, explained next.

Fetch Resource Steps (via the "get" keyword): This kind of step tells concourse which resources are required to execute the build plan. Concourse will then provide the latest version of the specified resources to the task steps.

Task Steps: A task consists of a shell command (or a shell script) and the name of a Docker image in which the script is executed. Tasks are the workhorses in concourse.ci. In our example we'll specify how to run the test suite and compile Java code in tasks. A task in concourse can have multiple inputs and multiple outputs (later we will see how we use tasks with inputs and outputs).

Update Resource Steps (via the "put" keyword): When a put step is specified in a build plan, a build will update a resource. In contrast to outputs of tasks, the put step updates the resource persistently, whereas an output of a task only lives during the execution of a build plan. Outputs of tasks are intended to pass data from one task of a build plan to another task of the same build plan.
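To get a feeling for how small that resource contract is, here is a minimal sketch of a check script's behaviour as a shell function (illustrative only; a real resource such as git-resource inspects the actual repository instead of reporting a hard-coded version):

```shell
# A resource type's "check" script receives the source configuration (and the
# last known version) as JSON on stdin, and must print a JSON array of new
# versions on stdout. This fake check always reports a single version.
check() {
  cat > /dev/null                 # consume the JSON input
  echo '[{"version":"1"}]'        # report one hard-coded version
}

# Example invocation, as concourse would call it:
echo '{"source":{}}' | check
```

The "in" and "out" scripts follow the same stdin/stdout JSON convention for fetching and pushing a version.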

Before we put the theory aside, let's have a look at the YAML specification of the hello world pipeline and see whether we recognise some of the concepts we just learned (this is done in the "The Hello World pipeline explained" section).

After this we'll extend the hello world example to use a resource ("Hello World with Resources" section).

Last but not least we'll deploy the microservices in the following sections.

The Hello World pipeline explained

The following pipeline specification is the same we used in the "First simple pipeline without deploying anything" section. In this section we'll match the different parts of the YAML to the concepts introduced in the last section. The image below annotates the pipeline specification with the concept represented by the specific lines.

The same hello world pipeline as in the "First simple pipeline without deploying anything" section.

We see that the hello world pipeline has exactly one job with exactly one task in its build plan.

Nested within the definition of the task you find the specification of the platform the task should run on. To execute tasks, a concourse.ci setup consists of one or multiple "workers". These are virtual machines on which concourse executes tasks. You could set up a concourse installation with workers that provide different platforms. Possible are: "linux", "windows" or "darwin", but a default concourse setup only provides Linux workers. You have to specify a platform in every task definition even if there are no other workers than Linux workers in your setup. This is because pipeline definitions are not coupled to a specific concourse setup.

Usually docker images are used to describe the inside of the container in which the task is executed.

Next comes the "image_resource" (annotated with "Docker Image" in the image above) within the task definition. The image_resource is easy to use. Understanding what concourse does with that configuration in detail is a bit more complicated. It's important to know that the tasks are not executed on the concourse workers directly. Instead they get executed within a container. Whenever a task should be executed a new container is launched for this purpose. A container image is used to describe the inside of a container (the file structure and programs that are available to the processes started within that container). In the hello world pipeline we are using the "ubuntu" docker image from the official docker hub registry.

The possibility to specify a docker image for each task decouples the task from the worker, so the task is not limited to the tools installed on the worker VM.

By using images, it is furthermore possible that two different tasks use the same dependency in different versions.

Using Docker images, a pipeline developer can install all required dependencies by themselves without asking an operator to install the dependencies on the concourse servers.

For example: You might have two java applications, one requires JRE 1.6 and the other requires JRE 1.7. To execute the unit tests for each of your applications you would create two concourse tasks. By using different Docker images for the tasks it's quite easy to run one task with JRE 1.6 and the other with 1.7 without installing both versions of the JRE on the worker VM itself. You just have to specify a docker image that includes the appropriate JRE version.
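Sketched as pipeline snippets, that pattern looks like this (the image names, tags and the ./run-tests.sh script are assumptions; any images containing the required Java versions would do):

```yaml
# Two tasks, each running in its own container image, so both Java versions
# can be used without installing either of them on the worker VM.
- task: test-legacy-app
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: { repository: openjdk, tag: "7-jre" }   # assumed tag
    run: { path: sh, args: ["-exc", "java -version && ./run-tests.sh"] }
- task: test-modern-app
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: { repository: openjdk, tag: "8-jre" }   # assumed tag
    run: { path: sh, args: ["-exc", "java -version && ./run-tests.sh"] }
```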

Last but not least there is a "run" block nested in the task block. The run block contains the path to a command which should be executed inside the specified container. In the hello world example, we are using a command sitting in a directory specified in the $PATH variable. So we don't need to specify the whole path. The command line arguments which should be passed to the command are specified separately in the "args" array within the run block. In the hello world example, we just pass one command line argument, which is "Hello World".

To execute multiple commands within one task you can use the following pattern:

Not Quite Best Practices I use this for small scripts or to quickly try something out. See section Refactor Pipeline to get an idea on how to manage large scripts properly.

jobs:
- name: hello_world_job
  plan:
  - task: hello_world_task
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: { repository: ubuntu }
      run:
        path: sh
        args:
        - -exc
        - |
          whoami
          date
          echo "Hello World"



In this example a YAML feature called a literal block is used. Notice the "|" character that introduces such a literal block.

In YAML a literal block is a multi-line string. In this example the literal block contains a shell script consisting of the three commands 'whoami', 'date' and 'echo "Hello World"'.
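As an aside, YAML also has a folded block style, introduced with ">", which joins the lines with spaces. For shell scripts you almost always want the literal "|" style so that each command stays on its own line:

```yaml
literal: |   # newlines preserved, value is "one\ntwo\n"
  one
  two
folded: >    # newlines folded into spaces, value is "one two\n"
  one
  two
```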

Hello World with Resources

Let's extend the hello world pipeline with two resources to get a feeling for how to use such concourse.ci resources. We'll extend the hello world pipeline in three steps:

Pro Tip Always take baby steps toward the desired result and verify that each step is working. This saves you a lot of troubleshooting and searching for syntax errors within your YAML files.

1. We'll use the concourse git-resource to observe a git repository on GitHub. When there is a new commit, the pipeline should fetch it and just list the files within the git repository, so that we see the file structure of the repository in the concourse web interface.
2. Once we've finished step one, we change the pipeline to deploy the code from the git repository to a Cloud Foundry installation (instead of listing the content of the repository).
3. By default a concourse pipeline is configured so that changes on a resource (the git repo in this example) don't trigger an execution of the pipeline. We'll change this behaviour in the last step of this section and check how this changes the visual representation of the pipeline in the concourse.ci web interface.

Step 1 - Fetch Git Resource

To complete the first step, create an empty YAML file (e.g. simple_deploy.yml) within your ci directory and insert the following content:

A simple pipeline using a git resource.



In the resources block at the top, the "app_sources" resource is defined. This identifier is then used across the YAML specification to refer to the resource.

resources:
- name: app_sources
  type: git
  source:
    uri: https://github.com/fermayo/hello-world-php

jobs:
- name: simple-deploy
  plan:
  - get: app_sources
  - task: list-repo-content
    config:
      platform: linux
      inputs:
      - name: app_sources
      image_resource:
        type: docker-image
        source: { repository: ubuntu }
      run:
        path: sh
        args:
        - -exc
        - |
          ls -R ./app_sources



Changes to the hello world pipeline are bolded. The first block is a resources block where the resources used in the pipeline are declared and configured. The only configuration we provide for the resource is the uri and, of course, the type of the resource. We also choose an identifier for the resource so that we can refer to it in jobs and tasks. In this example the resource identifier is "app_sources".

Have a look at the official concourse.ci git resource for more configuration options. For example, it is also possible to specify a username and password or a certificate if you want to use private repositories.

In the build plan of the job we then say "get: app_sources". This step checks whether the latest version of the resource is available in the concourse.ci cache and fetches it if not, so that the resource is available for subsequent steps.

Not every step necessarily requires access to every resource fetched in a job. This is why you must additionally specify that a task requires a specific resource as input. Concourse will then provide a directory inside the task container which contains the resource. This directory is named after the resource identifier ("app_sources" in this example).

When you run an ls command in the task you'll see that there is an app_sources directory. When you run "ls ./app_sources" you'll see the content of the resource.

To upload the pipeline definition to the concourse.ci server use the following command:

We already used this command at the beginning of this tutorial

concourse-hello-world $ fly -t local set-pipeline --pipeline simple-deploy --config ci/simple_deploy.yml



Open the concourse web interface and go to /teams/main/pipelines/simple-deploy. You can also navigate to a pipeline by clicking on the symbol in the upper left corner. The pipeline looks like this:

We did not yet configure the resource to trigger builds of the dependent jobs when there are new versions of the resource (the dashed line between resource and job indicates this). This means we must start the build manually. We already did that in the section "First simple pipeline without deploying anything". Have a look at the animated gif there to unpause the pipeline and start a build of the job, or use the following CLI commands instead:

Unpause and start the pipeline using the fly CLI instead of the web interface

concourse-hello-world $ fly -t local unpause-pipeline --pipeline simple-deploy

concourse-hello-world $ fly -t local trigger-job --job simple-deploy/simple-deploy --watch

Once the build completed you should see the following output in the job overview.

Step 2 - Deploy the app to Cloud Foundry

In the first step we used the concourse.ci git resource as an "input resource"; in this step we are going to use an "output resource". But this time it's not a git resource; instead we'll use the concourse.ci Cloud Foundry resource to push the application to a Cloud Foundry instance.

In this step we'll modify the pipeline definition from step 1 like this:

Bad Practice!!! Don't write passwords and other secrets directly into the pipeline yml. We'll refactor this later!

resources:
- name: app_sources
  type: git
  source:
    uri: https://github.com/wolfoo2931/concourse-ci-hello-world.git
- name: staging_deployment
  type: cf
  source:
    api: https://api.aws.ie.a9s.eu
    username: owolf@specify.io
    password: secret-pwd
    organization: owolf@specify.io
    space: dev
    skip_cert_check: false

jobs:
- name: simple-deploy
  plan:
  - get: app_sources
  - put: staging_deployment
    params:
      manifest: app_sources/manifest.yml



In this example we added another resource to the resources block. The new resource type is "cf" instead of "git". The concourse cf resource makes it very easy to deploy applications to Cloud Foundry. To do so it is required to specify the credentials of a Cloud Foundry instance in the resource config.

To finally deploy the application we added the put step to the "simple-deploy" job.

Why don't we need to specify an input for the put step like we had to do for the task that lists the directory content? The concourse documentation says:

All artefacts collected during the plan's execution will be available in the working directory.

This means that for put steps we don't need to explicitly tell concourse which inputs we need; we just get everything.

When you update the pipeline via the fly CLI the pipeline should look like this (the image shows the pipeline after a build succeeded):

Step 3: Trigger Builds Automatically

So far we have to trigger builds manually even if there are new commits in the git repo. To configure the pipeline so that concourse triggers new builds automatically as soon as it detects new commits, just add the following line to the pipeline definition (the bold line):

Add trigger: true to the pipeline definition.

resources:
- name: app_sources
  type: git
  source:
    uri: https://github.com/wolfoo2931/concourse-ci-hello-world.git
- name: staging_deployment
  type: cf
  source:
    api: https://api.aws.ie.a9s.eu
    username: owolf@specify.io
    password: secret-pwd
    organization: owolf@specify.io
    space: dev
    skip_cert_check: false

jobs:
- name: simple-deploy
  plan:
  - get: app_sources
    trigger: true
  - put: staging_deployment
    params:
      manifest: app_sources/manifest.yml



Once the pipeline is updated using the fly command concourse displays the pipeline with a solid line between the resource and the job (see image below).

With the current setup you will probably run into an issue: when concourse triggers a build while another build is still running, both builds run in parallel.

When two builds try to push the same application at the same time, one of them will probably fail.

To prevent builds from running in parallel you can insert the following option into the pipeline definition:

Add serial: true to the pipeline definition to prevent builds from running in parallel and breaking when both try to deploy the same application.

resources:
- name: app_sources
  type: git
  source:
    uri: https://github.com/wolfoo2931/concourse-ci-hello-world.git
- name: staging_deployment
  type: cf
  source:
    api: https://api.aws.ie.a9s.eu
    username: owolf@specify.io
    password: secret-pwd
    organization: owolf@specify.io
    space: dev
    skip_cert_check: false

jobs:
- name: simple-deploy
  serial: true
  plan:
  - get: app_sources
    trigger: true
  - put: staging_deployment
    params:
      manifest: app_sources/manifest.yml



There are still a few things missing here which would prevent me from using this pipeline:

Unit tests are not executed

Credentials are in the pipeline definition

The application encounters downtime when concourse performs a deployment

Deploy in test environment first and then to production

Refactoring pipelines regarding best practices

These things will be fixed in the next sections.

Pipe the first service to production (Java App)

The UAA is our open source guinea pig for this section. It's a useful general purpose component written in Java.

In this section we are going to specify a pipeline which tests, builds and deploys a Java application/service. The application we are going to use is the UAA.

UAA stands for User Account and Authentication and is a service that was developed as a component of Cloud Foundry. We are not going to deploy Cloud Foundry in this tutorial. The good thing about the UAA service is that this microservice is a general purpose service without any Cloud Foundry specific logic, so we can reuse it in our own microservice architecture. The UAA implements two popular protocols: OAuth 2.0 and SCIM.

At least OAuth 2.0 is very common and often used in microservice architectures to realize human-to-machine and machine-to-machine authentication/authorization.

You can find the source code of the UAA on GitHub.

Again we take small steps towards the desired pipeline and validate that each step works. For this pipeline we'll do the following:

The roadmap for this section. We'll add integration tests in the next section when we set up the pipeline for the second service.

1. Create the code repo, set up the file structure and insert pipeline code to check out the UAA Git repository.
2. Create a Docker image to provide all dependencies required to run the UAA unit tests.
3. Specify a concourse job to run the UAA unit tests and build a war file.
4. Store the war file in Artifactory.
5. Create a PostgreSQL instance in the test and production environment.
6. Specify a concourse job to deploy the UAA to the test environment.
7. Send an email once a new version of the UAA is deployed to the test environment.
8. Specify a concourse job to deploy the UAA to the production environment.

Step 1 - Setup Git Repo for the pipeline and create files

The official best practice is to store the pipeline code in the same source code repository as the actual application code. In our example this means we would have to store it in the UAA GitHub repository. But since we are not the owners of this repository we won't do that. Instead we create a dedicated Git repository to store the UAA pipeline.

Create a new empty directory and setup the following file structure:

$ mkdir concourse-ci-tutorial

$ cd concourse-ci-tutorial



concourse-ci-tutorial $ mkdir -p ci/pipelines

concourse-ci-tutorial $ touch ci/pipelines/uaa.yml



Now create a Git repository out of it and commit the changes:

concourse-ci-tutorial $ git init

concourse-ci-tutorial $ git add .

concourse-ci-tutorial $ git commit -m "initial file structure"



Step 2 - Create and Publish a Docker Image

You could simply use the link to the public Docker image I prepared and uploaded to my Docker hub account, but since this is a recurring step in building a pipeline I will not keep these details from you.

Using a free Docker hub account you can't upload private Docker images. This is ok for the tutorial.

If you've not already done it, install the Docker engine locally and sign up for a free Docker hub account.

We'll use a Dockerfile to describe the Docker image. Once we've described the image in a Dockerfile, everyone can create the image from it. Another benefit of Dockerfiles is that we can also put them into a source control system (git in our example).

Create an empty Dockerfile at:

concourse-ci-tutorial $ mkdir -p ci/dockerfiles/uaa

concourse-ci-tutorial $ touch ci/dockerfiles/uaa/Dockerfile

Insert the following content to the Dockerfile:

FROM java:8-jdk

To be fair, this is a very simple Dockerfile and we could have used the "java:8-jdk" image directly since we don't add any further dependencies to the image. If you have other dependencies required to test and build the application you would specify them in this Dockerfile.

For now, we just specify that our Docker image inherits from the "java:8-jdk" image.

Let's build an image out of the Dockerfile by running the following command:

In the following command you must replace "wolfoliver" with your Docker Hub account.


concourse-ci-tutorial $ docker build -t wolfoliver/uaa ci/dockerfiles/uaa

So far the Docker image is only available on your local Docker engine; the concourse pipeline can't download it from Docker hub yet. Running docker images shows all your local images.

List all Docker images stored on your local Docker engine. Because the Docker engine needs the base image to build the "uaa" image, the base image gets downloaded when you build the image.

concourse-ci-tutorial $ docker images

REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE

java 8-jdk 877969993f98 13 days ago 643.2 MB

wolfoliver/uaa latest 877969993f98 13 days ago 643.2 MB



To push the Docker image to the official Docker hub, where everyone can download it from anywhere, run the following command:

In the following command you must replace "wolfoliver" with your Docker Hub account.

docker login asks for your Docker hub credentials.

concourse-ci-tutorial $ docker login

concourse-ci-tutorial $ docker push wolfoliver/uaa

When you look at your Docker hub account in the browser interface you should see the new image.

Step 3 - Create concourse job to build the UAA and run the unit tests

In the following pipeline definition, you must replace "wolfoliver" with your Docker Hub account.

Open the ci/pipelines/uaa.yml file and insert the following pipeline definition:


resources:
- name: uaa_sources
  type: git
  source:
    uri: https://github.com/cloudfoundry/uaa.git
    tag_filter: '3.6.*'

jobs:
- name: build
  plan:
  - get: uaa_sources
    trigger: true
  - task: build
    config:
      platform: linux
      inputs:
      - name: uaa_sources
      outputs:
      - name: uaa_war
      image_resource:
        type: docker-image
        source: { repository: wolfoliver/uaa }
      run:
        path: sh
        args:
        - -exc
        - |
          export TERM=dumb
          cd uaa_sources
          ./gradlew test
          ./gradlew :cloudfoundry-identity-uaa:war
          mv uaa/build/libs/cloudfoundry-identity-uaa-*.war ../uaa_war

Before we start the pipeline, let's briefly discuss the bolded lines.

If you don't specify a tag_filter, every commit will trigger a deployment, even commits that don't have a tag at all.



You can also specify '3.*.*' to deploy every version that begins with a "3".

In the resources block we specify the UAA repository as the source where the code should be fetched from. Another option we specify for the git resource is tag_filter: '3.6.*'. This tells concourse that only commits tagged with a tag matching the specified glob will trigger the pipeline. The pipeline will only deploy UAA versions which have a git tag like '3.6.0', '3.6.1', '3.6.2', '3.6.3', '3.6.4', ..., '3.6.12', and so on. But versions like '3.6.4.rc1' will also be deployed.
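To get a feeling for the glob semantics, here is a rough sketch using Python's fnmatch as a stand-in for the resource's actual matcher (which may differ in edge cases):

```python
from fnmatch import fnmatch

# Which of these tags would satisfy tag_filter: '3.6.*'?
tags = ["3.6.0", "3.6.12", "3.6.4.rc1", "3.5.9", "v3.6.0"]
matching = [t for t in tags if fnmatch(t, "3.6.*")]
# Note: the glob matches '3.6.4.rc1' too, but not '3.5.9' or 'v3.6.0'.
print(matching)
```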

The next configuration I want to talk about is the outputs config:

outputs:

- name: uaa_war



That's how we move content from one step to another. Every file created in a step is deleted when the task finishes, unless the task moves the file to a specified output directory. In this pipeline we do this in the last line:

mv uaa/build/libs/cloudfoundry-identity-uaa-*.war ../uaa_war

The other commands in the run script run the unit tests and build the war file (cloudfoundry-identity-uaa-*.war).
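To see the input/output mechanics in isolation, here is a tiny sketch outside concourse (the directory names mirror the pipeline; 'app.war' is a made-up file name for the demo):

```shell
# Simulate the directories concourse mounts into the task container.
workdir=$(mktemp -d)
cd "$workdir"
mkdir uaa_sources uaa_war                 # input dir and declared output dir
echo "war-bytes" > uaa_sources/app.war    # pretend the build produced a war
# Only files placed into the declared output directory survive the task.
mv uaa_sources/app.war uaa_war/
ls uaa_war
```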

Now, let's see whether it runs. Upload the pipeline code with:

concourse-ci-tutorial $ fly -t local set-pipeline --pipeline uaa --config ci/pipelines/uaa.yml

Unpause and start a build via fly CLI or browser view.

Running the tests might take about 40 minutes when you're running concourse on your local machine! If that's too long, or the test suite fails for any reason, you can emulate the test run like this:

In order to complete the tutorial, I suggest not wasting time troubleshooting the test suite!

......
      run:
        path: sh
        args:
        - -exc
        - |
          export TERM=dumb
          cd uaa_sources
          echo "Unit tests are skipped to go faster :)"
          ./gradlew :cloudfoundry-identity-uaa:war
          mv uaa/build/libs/cloudfoundry-identity-uaa-*.war ../uaa_war

Of course you should not skip the unit tests in real world projects! This is just so you can proceed with the tutorial without spending too much time on troubleshooting.

Step 4a: Setup local Artifactory server

If you don't have an account on a production Artifactory server, you can set one up on your local machine in order to follow this tutorial. This section briefly explains how to do it.

In this step we'll upload the created war file to Artifactory. Artifactory is a widely used open source system to store software packages. We'll store the war file in Artifactory so that later jobs can download it and deploy it to the test and production environments. This way we don't have to rebuild the war file for each environment; instead we just download it from Artifactory.

If you are running your concourse setup locally with vagrant, you can also set up Artifactory locally. There is a Docker image to get it running quickly. Start your Docker engine and execute:

concourse-ci-tutorial $ docker pull docker.bintray.io/jfrog/artifactory-oss

concourse-ci-tutorial $ docker run -p 8081:8081 docker.bintray.io/jfrog/artifactory-oss

To check whether Artifactory runs, find out the IP address of the VM your Docker engine runs on. It's probably not localhost, because on Mac OS and Windows the Docker engine runs in a VM (which runs on your local machine).

To find out the Docker engine IP address run:

Find out the IP address where your Artifactory server runs.

concourse-ci-tutorial $ docker-machine ip default

192.168.99.100

If the docker machine IP is 192.168.99.100 you can check whether Artifactory is running by opening http://192.168.99.100:8081 in your browser.

The default Artifactory admin credentials are

user: admin

password: password

If you need more help running the Artifactory server have a look at the official documentation.

Step 4b: Store the war file to Artifactory

Before the pipeline can store the war files to Artifactory we have to configure a repository in Artifactory. To do so open the Artifactory web interface, login and perform the following steps:

Screencast: These steps are also shown in the video below.

1. Click on "Admin" on the left side
2. Click on "Local" below the "Repositories" section
3. Click on "New" in the upper right corner
4. Select package type "Generic"
5. Enter Repository key: "war-files"
6. Click on "Save & Finish"

How to configure a new local, generic repository in JFrog Artifactory.

After Artifactory is running and configured we can extend the pipeline in ci/pipelines/uaa.yml . See the required changes below to get the pipeline to upload the war file to Artifactory.

Extended pipeline to store war files using the Artifactory concourse.ci resource.



Note: You must replace 192.168.99.100:8081 with the address of your Artifactory server. The credentials must be replaced as well.

resource_types:
- name: artifactory
  type: docker-image
  source:
    repository: pivotalservices/artifactory-resource

resources:
- name: uaa_sources
  type: git
  source:
    uri: https://github.com/cloudfoundry/uaa.git
    tag_filter: '3.6.*'
- name: uaa-build
  type: artifactory
  source:
    endpoint: http://192.168.99.100:8081/artifactory
    repository: "/war-files/uaa"
    regex: "cloudfoundry-identity-uaa-(?<version>.*).war"
    username: admin
    password: password
    skip_ssl_verification: true

jobs:
- name: build
  plan:
  - get: uaa_sources
    trigger: true
  - task: build
    config:
      platform: linux
      inputs:
      - name: uaa_sources
      outputs:
      - name: uaa_war
      image_resource:
        type: docker-image
        source: { repository: wolfoliver/uaa }
      run:
        path: sh
        args:
        - -exc
        - |
          export TERM=dumb
          cd uaa_sources
          ./gradlew :cloudfoundry-identity-uaa:war
          mv uaa/build/libs/cloudfoundry-identity-uaa-*.war ../uaa_war
  - put: uaa-build
    params:
      file: uaa_war/cloudfoundry-identity-uaa-*.war



Recap: We already know that custom concourse resources are realized as Docker images.

At the top we see the whole new section resource_types . We need to specify this because Artifactory is not a built-in resource in concourse ci. The artifactory-resource is an external resource, so we have to define where the logic of this resource type is located. By logic we mean the code that describes how to upload artefacts to Artifactory, how to check whether new versions of an artefact are available, and how to download specific artefact versions from Artifactory. Concourse requires this logic to be encapsulated in a Docker image. That's why we specify a Docker image here (the image is pivotalservices/artifactory-resource ).

Next, there is a new entry in the resources section. Here we make use of the Artifactory resource type, declare a concrete resource (a concrete Artifactory repository in this case) and give it the name uaa-build . We also specify the Artifactory endpoint and credentials here. Have a look at the official artifactory concourse.ci resource for more configuration options.

At the end of the pipeline definition we add a "put" step to the build plan of the single job we have. There we specify which file should be uploaded to the Artifactory repository.

When you update the pipeline using the fly set-pipeline command, the pipeline should look like this:

Step 5: Create RDBMS instances

To run the UAA you need a relational database (PostgreSQL, MySQL) where the UAA can store its data.

For cheap testing purposes: you don't necessarily need to create two database servers to follow this tutorial.

If you don't have the possibility to create two database instances, you can also run the UAA without a persistent database service and instead use HSQLDB. HSQLDB is a lightweight database that can be embedded into other Java applications. With such a setup the UAA will write its data to the file system of the server it is running on. But in this case the setup is not very usable in real life, for the following reasons:

You can't scale the UAA to more than one instance. This is because (using Cloud Foundry, or a PaaS in general) each instance gets its own file system and they are not synced.

All data is lost when the UAA instance gets restarted. This is because the file system of application instances is ephemeral when using a PaaS. Read about 12-factor applications (especially the section about stateless apps) to understand why this is handled this way.

Anyway, if you decide to go with HSQLDB you still have to ensure that the right Cloud Foundry application spaces are available. This is described next.

In order to create a database instance, you have to log in to your Cloud Foundry provider of choice. To do so you can use the cf CLI:

In this example we are using anynines as Cloud Foundry provider. Replace https://api.aws.ie.a9s.eu if you want to use another one.

concourse-ci-tutorial $ cf login -a https://api.aws.ie.a9s.eu

To manage our two environments, test and production, we use two different Cloud Foundry application spaces. To see which spaces are available run:

concourse-ci-tutorial $ cf spaces

Getting spaces in org owolf_anynines_com as owolf@anynines.com...



name

dev

production



In the current Cloud Foundry organization two spaces already exist ( dev and production ). If you need or want to create another space run:

concourse-ci-tutorial $ cf create-space test

Most Cloud Foundry providers have either a MySQL or PostgreSQL service in their marketplace. To see the service offering run:

concourse-ci-tutorial $ cf marketplace

Getting services from marketplace in org owolf_anynines_com / space dev as owolf@specify.io...

OK



service plans description

....

a9s-postgresql postgresql-single-small,.... anynines PostgreSQL service for Cloud Foundry

....



The output above only shows the relevant section of the anynines marketplace. We need to create a PostgreSQL/MySQL instance in both environments (Cloud Foundry spaces). To change the Cloud Foundry space and create a PostgreSQL instance run:

Switch to the test and production space and create a PostgreSQL instance in each space.

concourse-ci-tutorial $ cf target -s test

concourse-ci-tutorial $ cf create-service a9s-postgresql postgresql-single-small uaadb

concourse-ci-tutorial $ cf target -s production

concourse-ci-tutorial $ cf create-service a9s-postgresql postgresql-single-small uaadb



Step 6: Specify a concourse job to deploy the UAA to the test environment

In this step we are going to deploy the UAA to the test environment. To do so extend the uaa.yml with the bolded sections:

Bad Practice!!! Don't write passwords and other secrets directly into the pipeline yml. We'll refactor this later!

resource_types:
- name: artifactory
  type: docker-image
  source:
    repository: pivotalservices/artifactory-resource

resources:
- name: uaa_sources
  type: git
  source:
    uri: https://github.com/cloudfoundry/uaa.git
    tag_filter: '3.6.*'
- name: uaa-build
  type: artifactory
  source:
    endpoint: http://192.168.99.100:8081/artifactory
    repository: "/war-files/uaa"
    regex: "cloudfoundry-identity-uaa-(?<version>.*).war"
    username: admin
    password: password
    skip_ssl_verification: true
- name: test_deployment
  type: cf
  source:
    api: https://api.aws.ie.a9s.eu
    username: owolf@specify.io
    password: secret
    organization: owolf_specify_io
    space: test
    skip_cert_check: false

jobs:
- name: build
  plan:
  - get: uaa_sources
    trigger: true
  - task: build
    config:
      platform: linux
      inputs:
      - name: uaa_sources
      outputs:
      - name: uaa_war
      image_resource:
        type: docker-image
        source: { repository: wolfoliver/uaa }
      run:
        path: sh
        args:
        - -exc
        - |
          export TERM=dumb
          cd uaa_sources
          ./gradlew :cloudfoundry-identity-uaa:war
          mv uaa/build/libs/cloudfoundry-identity-uaa-*.war ../uaa_war
  - put: uaa-build
    params:
      file: uaa_war/cloudfoundry-identity-uaa-*.war

- name: deploy-to-test
  plan:
  - get: uaa-build
    passed: ['build']
    trigger: true
  - task: add-manifest-to-uaa-build
    config:
      platform: linux
      inputs:
      - name: uaa-build
      outputs:
      - name: uaa-build-with-manifest
      image_resource:
        type: docker-image
        source: { repository: wolfoliver/uaa }
      run:
        path: sh
        args:
        - -exc
        - |
          cp uaa-build/* uaa-build-with-manifest
          export WAR_PATH=`cd uaa-build-with-manifest && ls cloudfoundry-identity-uaa-*.war`
          cat <<EOT >> uaa-build-with-manifest/manifest.yml
          applications:
          - name: uaa
            memory: 512M
            path: ${WAR_PATH}
            host: test-uaa
            services:
            - uaadb
            env:
              JBP_CONFIG_SPRING_AUTO_RECONFIGURATION: '[enabled: true]'
              JBP_CONFIG_TOMCAT: '{tomcat: { version: 7.0.+ }}'
              SPRING_PROFILES_ACTIVE: postgresql,default
              UAA_URL: https://test-uaa.aws.ie.a9sapp.eu
              LOGIN_URL: https://test-uaa.aws.ie.a9sapp.eu
          EOT
  - put: test_deployment
    params:
      manifest: uaa-build-with-manifest/manifest.yml



Mind that you have to replace the following configuration according to your Cloud Foundry provider:

Replace the credentials specified in the source section for the test_deployment resource.

UAA_URL and LOGIN_URL in the add-manifest-to-uaa-build task of the deploy-to-test job must be replaced with the right URL.
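The manifest generation in the add-manifest-to-uaa-build task boils down to a shell heredoc with variable expansion. A standalone sketch of that mechanism (the war file name is faked for the demo):

```shell
outdir=$(mktemp -d)
touch "$outdir/cloudfoundry-identity-uaa-3.6.0.war"    # fake artefact
# Same trick as in the pipeline: pick up the war's file name.
WAR_PATH=$(cd "$outdir" && ls cloudfoundry-identity-uaa-*.war)
# ${WAR_PATH} is expanded by the shell before the heredoc is written to the file.
cat <<EOT >> "$outdir/manifest.yml"
applications:
- name: uaa
  path: ${WAR_PATH}
EOT
cat "$outdir/manifest.yml"
```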

Once you've uploaded the pipeline definition to concourse.ci it should look like this:

In case you can't provision a MySQL or PostgreSQL instance on your Cloud Foundry installation, specify the following in order to use an internal HSQLDB:

If you don't want to use a real database for cost reasons, remove the services array from the deployment manifest and set JBP_CONFIG_SPRING_AUTO_RECONFIGURATION to false.

........
          cat <<EOT >> uaa-build-with-manifest/manifest.yml
          applications:
          - name: uaa
            memory: 512M
            path: ${WAR_PATH}
            host: test-uaa
            env:
              JBP_CONFIG_SPRING_AUTO_RECONFIGURATION: '[enabled: false]'
              JBP_CONFIG_TOMCAT: '{tomcat: { version: 7.0.+ }}'
              SPRING_PROFILES_ACTIVE: default
              UAA_URL: https://test-uaa.aws.ie.a9sapp.eu
              LOGIN_URL: https://test-uaa.aws.ie.a9sapp.eu
          EOT
........

Step 7: Send an email once a new version of the UAA is deployed to the test environment

In this section we add some code to send an email when a build fails or succeeds. As in the other steps, you can see the whole pipeline with the required changes in bold.

Bad Practice!!! Here we are using two almost identical tasks to create the content of the email. We'll refactor this later.

resource_types:
- name: artifactory
  type: docker-image
  source:
    repository: pivotalservices/artifactory-resource
- name: email
  type: docker-image
  source:
    repository: pcfseceng/email-resource

resources:
- name: uaa_sources
  type: git
  source:
    uri: https://github.com/cloudfoundry/uaa.git
    tag_filter: '3.6.*'
- name: uaa-build
  type: artifactory
  source:
    endpoint: http://192.168.99.100:8081/artifactory
    repository: "/war-files/uaa"
    regex: "cloudfoundry-identity-uaa-(?<version>.*).war"
    username: admin
    password: password
    skip_ssl_verification: true
- name: test_deployment
  type: cf
  source:
    api: https://api.aws.ie.a9s.eu
    username: owolf@anynines.com
    password: secret
    organization: owolf_anynines_com
    space: test
    skip_cert_check: false
- name: notification
  type: email
  source:
    smtp:
      host: smtp.gmail.com
      port: "587"
      username: owolf@specify.io
      password: secret
    from: ci@specify.io
    to: [ "devteam@specify.io" ]

jobs:
- name: build
  plan:
  - get: uaa_sources
    trigger: true
  - task: build
    config:
      platform: linux
      inputs:
      - name: uaa_sources
      outputs:
      - name: uaa_war
      image_resource:
        type: docker-image
        source: { repository: wolfoliver/uaa }
      run:
        path: sh
        args:
        - -exc
        - |
          export TERM=dumb
          cd uaa_sources
          ./gradlew :cloudfoundry-identity-uaa:war
          mv uaa/build/libs/cloudfoundry-identity-uaa-*.war ../uaa_war
    on_success:
      do:
      - task: create-notification-content
        config:
          platform: linux
          inputs:
          - name: uaa_sources
          outputs:
          - name: notification-content
          image_resource:
            type: docker-image
            source: { repository: pallet/git-client }
          run:
            path: sh
            args:
            - -exc
            - |
              LAST_COMMIT_HASH=$(cd uaa_sources && git log -1 | grep commit | cut -d' ' -f2)
              LAST_COMMIT_DETAILS=$(cd uaa_sources && git log -1 --name-status)
              echo "UAA Build Successful ${LAST_COMMIT_HASH}" >> notification-content/notification_subject.txt
              echo "UAA Build Successful

              ${LAST_COMMIT_DETAILS}" >> notification-content/notification_body.txt
      - put: notification
        params:
          subject: notification-content/notification_subject.txt
          body: notification-content/notification_body.txt
    on_failure:
      do:
      - task: create-notification-content
        config:
          platform: linux
          inputs:
          - name: uaa_sources
          outputs:
          - name: notification-content
          image_resource:
            type: docker-image
            source: { repository: pallet/git-client }
          run:
            path: sh
            args:
            - -exc
            - |
              LAST_COMMIT_HASH=$(cd uaa_sources && git log -1 | grep commit | cut -d' ' -f2)
              LAST_COMMIT_DETAILS=$(cd uaa_sources && git log -1 --name-status)
              echo "UAA Build FAILED ${LAST_COMMIT_HASH}" >> notification-content/notification_subject.txt
              echo "UAA Build FAILED

              ${LAST_COMMIT_DETAILS}" >> notification-content/notification_body.txt
      - put: notification
        params:
          subject: notification-content/notification_subject.txt
          body: notification-content/notification_body.txt
  - put: uaa-build
    params:
      file: uaa_war/cloudfoundry-identity-uaa-*.war

- name: deploy-to-test
  plan:
  - get: uaa-build
    passed: ['build']
    trigger: true
  - task: add-manifest-to-uaa-build
    config:
      platform: linux
      inputs:
      - name: uaa-build
      outputs:
      - name: uaa-build-with-manifest
      image_resource:
        type: docker-image
        source: { repository: wolfoliver/uaa }
      run:
        path: sh
        args:
        - -exc
        - |
          cp uaa-build/* uaa-build-with-manifest
          export WAR_PATH=`cd uaa-build-with-manifest && ls cloudfoundry-identity-uaa-*.war`
          cat <<EOT >> uaa-build-with-manifest/manifest.yml
          applications:
          - name: uaa
            memory: 512M
            path: ${WAR_PATH}
            host: test-uaa
            services:
            - uaadb
            env:
              JBP_CONFIG_SPRING_AUTO_RECONFIGURATION: '[enabled: true]'
              JBP_CONFIG_TOMCAT: '{tomcat: { version: 7.0.+ }}'
              SPRING_PROFILES_ACTIVE: postgresql,default
              UAA_URL: https://test-uaa.aws.ie.a9sapp.eu
              LOGIN_URL: https://test-uaa.aws.ie.a9sapp.eu
          EOT
  - put: test_deployment
    params:
      manifest: uaa-build-with-manifest/manifest.yml



Like the Artifactory resource, the concourse.ci email resource is not a built-in resource, so we have to specify a new resource type. And as with the resource type for the Artifactory resource, we have to specify where the Docker image is located which contains the logic to send emails.

In the resources section SMTP settings must be provided. In the pipeline example I'm using my gmail account to send emails.

Note: When you are using gmail you must turn on access for less secure apps in the gmail settings.

The on_success , on_failure and do keywords are new statements which we did not use before.
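Stripped of the task details, the branching structure added to the build step has this shape (a condensed sketch of the pipeline above, not a complete definition):

```yaml
jobs:
- name: build
  plan:
  - task: build
    # ... build task config ...
    on_success:
      do:                       # 'do' groups several steps into one hook
      - task: create-notification-content
        # ... writes subject and body files ...
      - put: notification      # sends the success email
    on_failure:
      do:
      - task: create-notification-content
        # ... FAILED variant ...
      - put: notification      # sends the failure email
```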

Because we want to send out different emails when the build step fails or succeeds, we have to specify two different branches in the build plan. Using the on_success statement on a task, we can execute another step only when the task succeeds (the same goes for on_failure ).

In our case we want to execute two steps when the build task succeed/failed. The first step is a task to create the email content, the second step is a put to send out the email. To be able to specify two steps in the on_success / on_failure hook we must use the do statement.
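The shape of these hooks on the build task can be sketched like this (the task file paths and the content-generating task names are illustrative, not part of the real pipeline):

```yaml
jobs:
- name: build
  plan:
  - task: build-uaa-war
    file: uaa/ci/build.yml              # illustrative task file
    on_success:
      do:                                # 'do' groups several steps into one
      - task: create-success-mail       # first: generate the email content
        file: uaa/ci/success-mail.yml
      - put: send-an-email              # second: send it out
        params:
          subject: notification-content/notification_subject.txt
          body: notification-content/notification_body.txt
    on_failure:
      do:
      - task: create-failure-mail
        file: uaa/ci/failure-mail.yml
      - put: send-an-email
        params:
          subject: notification-content/notification_subject.txt
          body: notification-content/notification_body.txt
```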

Step 8: Specify a concourse job to deploy the UAA to the production environment

In this final step of this section we want to deploy the UAA application to the production environment, but this time not automatically as we did for the test environment. To do this, extend the pipeline with the following code:

Code that needs to be added so that the pipeline deploys to production.



Because the pipeline definition is getting longer now, you only see the sections that need to be added and not the complete pipeline.

...



resources:



...



- name: production_deployment
  type: cf
  source:
    api: https://api.aws.ie.a9s.eu
    username: owolf@anynines.com
    password: secret
    organization: owolf_anynines_com
    space: production
    skip_cert_check: false



...



jobs:



...



- name: deploy-to-production
  plan:
  - get: uaa-build
    passed: ['deploy-to-test']
  - task: add-manifest-to-uaa-build
    config:
      platform: linux
      inputs:
      - name: uaa-build
      outputs:
      - name: uaa-build-with-manifest
      image_resource:
        type: docker-image
        source: { repository: wolfoliver/uaa }
      run:
        path: sh
        args:
        - -exc
        - |
          cp uaa-build/* uaa-build-with-manifest
          export WAR_PATH=`cd uaa-build-with-manifest && ls cloudfoundry-identity-uaa-*.war`
          cat <<EOT >> uaa-build-with-manifest/manifest.yml
          applications:
          - name: uaa
            memory: 512M
            path: ${WAR_PATH}
            host: prod-uaa
            services:
            - uaadb
            env:
              JBP_CONFIG_SPRING_AUTO_RECONFIGURATION: '[enabled: true]'
              JBP_CONFIG_TOMCAT: '{tomcat: { version: 7.0.+ }}'
              SPRING_PROFILES_ACTIVE: postgresql,default
              UAA_URL: https://prod-uaa.aws.ie.a9sapp.eu
              LOGIN_URL: https://prod-uaa.aws.ie.a9sapp.eu
          EOT
  - put: production_deployment
    params:
      manifest: uaa-build-with-manifest/manifest.yml



When you extend the pipeline with the job above it should look like this:

Note that there is a broken (dashed) line between the "uaa-build" resource and the "deploy-to-production" job. This indicates that you have to open the job and start a build manually if you want to deploy a new UAA version to production.
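Instead of clicking the plus button in the web interface, the manual deployment can also be kicked off with the fly CLI. This is a sketch assuming your fly target is named ci and the pipeline was set with the name uaa; adjust both to your setup:

```shell
# Trigger the manual production deployment job
fly -t ci trigger-job --job uaa/deploy-to-production

# Follow the build output of that job in the terminal
fly -t ci watch --job uaa/deploy-to-production
```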

Not Finished Yet

The last two chapters are not written yet. Because writing this tutorial has been quite time-consuming so far, I would like to get some feedback first before I continue.



If you think the last two chapters are worth writing, just drop me a message on Twitter.