Today's blog post is a glance back at my “Continuous Integration in .NET Projects” post from 2013, looking at how to optimize your development process from today's perspective. We’ll identify some red flags to watch out for, suggest best practices and share tools and insights from our own development process.

You’re in trouble if you find any of the following in your software development process:

You’re deploying your applications manually, it’s a pain in the neck and it takes a lot of time.

You manually perform source code integrations, and conflicts often occur.

You manually edit application configuration in staging or production environments.

You tell other developers that “it works on my machine” and the problem is in their code.

You have automated tests, but don’t rely on them.

You often have a last-minute hotfix in production.

You feel like you work as a “fireman” more often than as an engineer.

Fear not – even if you’re feverishly nodding in agreement, this article should help you correct your course.

Continuous Integration vs. Continuous Delivery vs. Continuous Deployment

To begin, let's define the differences among these three practices:

Continuous Integration is an automated process that integrates source code changes and merges all developer working copies into a shared mainline several times a day. The feedback loop is part of this process.

Continuous Delivery is an extended Continuous Integration process with additional steps, producing an installable outcome of the software or a delivery package (which can be a binary file, a zip archive or an installer executable). This process works if you don’t fully rely on automated tests and you need to control when software is shipped to production.

Continuous Deployment is an extended Continuous Delivery process with additional steps that automate deployment to staging and production.

The Continuous Integration, Continuous Delivery and Continuous Deployment practices extend the feedback loop of the rapid release cycle.

Based on our experience, the best way to improve the rapid release cycle is to use the right tools:

BitBucket as the source control service, though you can use any other Git service or server.

SourceTree as a Git client for Windows and Mac. It supports the git-flow process right out of the box.

TeamCity from JetBrains as the build and deployment server. We use it with six build agents for .NET, Java and iOS projects.

Windows Azure and AWS cloud platforms. Both have powerful APIs that let you automate infrastructure tasks.

New Relic as the uptime and performance monitoring tool.

Raygun as the error tracking platform.

Say hello to best practices

Here is a general list of best practices for software engineers across various industries and fields:

Always use a version control system. There isn’t a single reason not to. Everyone uses it, and it’s easy to implement. You can use version control as a SaaS service using BitBucket, GitHub, TFS Online and others, or you can install a version control server on your own hardware. We recommend any flavor of Git. On top of that, you should consider the git-flow process. The combination of BitBucket, SourceTree and git-flow works like a charm.

Automate the build. A common mistake is not to include everything in the automated build. The automated build should not only get the latest source code from the repository and compile it, but also stop/start applications and processes, execute scripts to update databases if necessary, manage application configuration and so on. You can use any automated build server from a long list that includes Codeship, TravisCI, SemaphoreCI, CircleCI, Jenkins, Bamboo, TeamCity and others.

Your build should be self-testing. Include automated tests in the build process to catch bugs faster and with higher efficiency. At a minimum, you should prepare unit tests and integration tests. Automated tests for user interfaces are also a very good idea (see the magic of http://seleniumhq.org/), but you should consider running these as a separate process to keep your builds as fast as possible.

Commit at least every day. The more frequent, the better. The single most important prerequisite for a developer committing to the main branch is that their code can build correctly. This, of course, includes passing the build tests. As with any commit cycle, the developer first updates their working copy to match the main branch, resolves any conflicts with the mainline and then builds on their local machine. If the build passes, then they’re free to commit the changes.

Keep the build fast. The whole point of Continuous Integration is to provide rapid feedback. A build that takes 30 minutes to an hour is unreasonable. If you're starting a new project, think about how to keep the build fast. Bottlenecks usually occur at integration and user interface tests.

Test in a staging environment before deploying software to production. No, seriously, you need to test. Surprised? (/sarcasm).

Make it easy for everyone to get the latest executable. Anyone involved in the project should be able to get the latest executable and run it for demonstrations, exploratory testing or just to see what changed this week. Prepare build scripts in a way that allows you to run a build from any selected branch – this will help when you need to build a staging version from a feature branch but aren’t ready to merge to master yet.

Everyone should see what's happening. One of the most important things to communicate is the state of the build. Failed builds should be reported via email or RSS feed. If you use Slack, you should integrate it with your build server as well. Only notify your team about failed builds.

Prepare environments. Whether you only want to set up Continuous Integration practices, or your plan is to implement the full stack with Continuous Deployment, you will always need to set up multiple environments (development, integration and/or staging, and production) to run your builds, your tests and, of course, production itself.
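Taken together, the practices above usually boil down to one scripted pipeline. Here is a minimal sketch of such a build driver in shell; the actual compile, migration and test commands are stack-specific and only hinted at in the comments:

```shell
#!/bin/sh
# Minimal sketch of an automated build driver. The real commands for each
# step (compiler, test runner, migration tool) depend on your stack and
# are only suggested in the trailing comments.
set -e  # stop at the first failing step so a broken build surfaces early

run_step() {
    step_name="$1"; shift
    echo "==> $step_name"
    if ! "$@"; then
        echo "BUILD FAILED at: $step_name"
        exit 1
    fi
}

run_step "Update working copy"            true  # e.g. git pull origin develop
run_step "Compile (DEBUG)"                true  # e.g. msbuild App.sln /p:Configuration=Debug
run_step "Run database migrations"        true  # e.g. a Tarantino or FluentMigrator run
run_step "Run unit and integration tests" true  # e.g. your test runner of choice
echo "BUILD SUCCEEDED"
```

Any step that exits with a non-zero status aborts the pipeline and reports which step failed, which is exactly the rapid feedback Continuous Integration is after.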

Environments

You should have development, integration, staging, and, obviously, production environments.

The development environment is usually a developer machine installed with the required software.

The integration environment is a separate machine or machines that run Continuous Integration processes, periodically execute auto-build scripts and report the status of builds.

While difficult, you should try to have a staging environment that’s as close as possible to your final production environment.

Keep production environment resources separate and fully isolated from integration and staging. Usually, you want the whole team to access resources in the integration and staging environments, while access to production resources is restricted.

Database continuous integration strategy

Database changes (migrations) and corresponding code changes must always be deployed together. When deploying software to an environment, code files and libraries may be deleted or overwritten. Database changes, however, must be intelligently manipulated so as not to destroy vital business data.

Successful database migration management requires a consistent strategy applied by all team members.

Database migrations as incremental SQL files

In this approach, developers are responsible for issuing database schema updates as an incremental SQL file per feature. Database migrations are executed as part of the build process on the Continuous Integration server, which runs each SQL file in a transaction and ensures that each SQL file is executed only once per database.

Using the SQL file-per-migration approach, you’ll have a structure similar to this:
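For illustration (the file names here are hypothetical), such a migrations folder might look like this:

```
Database/Migrations/
    201312041530_create_blog_tables.sql
    201312051034_add_default_master_page.sql
    201312091115_add_author_column.sql
```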

Based on our experience with this approach, we recommend that you use a date-time filename prefix to keep the correct file order. Additionally, we recommend using the Tarantino DatabaseChangeManagement utility to simplify database migration tasks on the continuous integration server side.

Find more information here: http://code.google.com/p/tarantino/wiki/DatabaseChangeManagement.

Database migrations in the source code

This approach is an alternative to creating lots of SQL scripts that have to be run manually by every developer in the development environment. Instead, database schema changes are described as C# classes that can be checked into the version control system. Schema updates are executed as part of the build process on the Continuous Integration server, or can be initialized on application start – that’s up to you.

For more information, check out: https://github.com/schambers/fluentmigrator

Using the source code file-per-migration approach, you’ll have a structure similar to this:
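For illustration, a migrations folder in this style might look like the listing below; the second file name matches the BetterCMS example, while the first is hypothetical:

```
BetterCms.Module.Blog/Models/Migrations/
    Migration201311251100.cs
    Migration201312051034.cs
```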

Here’s a migration example from the BetterCMS Blog module using FluentMigrator:

namespace BetterCms.Module.Blog.Models.Migrations
{
    [Migration(201312051034)]
    public class Migration201312051034 : DefaultMigration
    {
        /// <summary>
        /// Initializes a new instance of the <see cref="Migration201312051034"/> class.
        /// </summary>
        public Migration201312051034()
            : base(BlogModuleDescriptor.ModuleName)
        {
        }

        /// <summary>
        /// Ups this instance.
        /// </summary>
        public override void Up()
        {
            Create
                .Column("DefaultMasterPageId")
                .OnTable("Options")
                .InSchema(SchemaName)
                .AsGuid().Nullable();

            Create
                .ForeignKey("FK_Cms_BlogOptions_Cms_Pages")
                .FromTable("Options").InSchema(SchemaName).ForeignColumn("DefaultMasterPageId")
                .ToTable("Pages").InSchema((new RootVersionTableMetaData()).SchemaName).PrimaryColumn("Id");
        }
    }
}

Build and deployment strategy

The build and deployment strategy for the integration, staging and production environments is very similar. Let’s review the strategy steps per practice – what happens after a developer implements a new feature, builds the new version of the application locally and pushes the new feature to the source control server.

Continuous integration strategy steps:

The build server detects the source code changes and automatically calls the build script.

Build script compiles the source code with the DEBUG configuration of the application.

Build script runs database migrations on the integration environment database.

Build script updates the configuration files with the integration environment configuration (app settings, connection strings, logging rules, etc.).

Build script runs all unit and integration tests.

If tests pass, the build script initiates user interface tests as a separate process, if available.

If any step fails, the corresponding project team is informed about the integration issue.

Continuous delivery strategy steps:

The build server detects the committed source code changes, but waits until:

a) The Continuous Integration process successfully finishes all integration steps.

b) If user interface tests run as a separate process, that process successfully finishes all tasks.

Build script executes the Continuous Delivery steps, referencing the corresponding source code version. The source code revision is received from the Continuous Integration process, since it indicates the quality of the committed source code.

Build script compiles the source code with the RELEASE configuration of the application.

Build script prepares database migrations as part of the delivery package.

Build script prepares the configuration files as part of the delivery package (app settings, connection strings, logging rules, etc.).

Build script uploads the generated delivery package to a packages repository (AWS S3 bucket, shared folder on the network, etc.). The delivery package name includes the release version number, date-time and source code revision number.

If any step fails, the corresponding project team is informed about the integration issue.
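The packaging and upload steps above can be sketched as a small script that stamps the package name with the release version, build date-time and source revision before uploading it (all values and targets here are placeholders):

```shell
#!/bin/sh
# Sketch: compose the delivery package name from the release version,
# build date-time and source revision, then upload it. Every value and
# target below is a placeholder for what your build server provides.
set -e

RELEASE_VERSION="1.4.2"           # supplied by the build server
REVISION="a1b2c3d"                # source control revision of the build
STAMP=$(date -u +%Y%m%d%H%M)      # build date-time, UTC

PACKAGE="myapp-${RELEASE_VERSION}-${STAMP}-${REVISION}.zip"
echo "Packaging ${PACKAGE}"
# zip -r "$PACKAGE" ./publish               # archive the RELEASE build output
# aws s3 cp "$PACKAGE" s3://my-packages/    # or copy to a network share
```

Encoding the version, date-time and revision into the file name makes it trivial to trace any deployed package back to the exact commit that produced it.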

Continuous Deployment strategy steps:

The build server is configured in exactly the same way as in the Continuous Delivery version.

Build server tracks the delivery packages repository for new releases.

After a new release package is available, it’s queued for the automated deployment process.

The automated deployment process extracts the queued delivery package, stops the corresponding apps and processes, uploads the extracted files, updates configurations, runs database migrations and starts the applications and processes again.

For distributed applications (e.g., a website running on multiple web servers), it’s a common scenario to deploy the new version in multiple steps without downtime for the end user. This can be achieved by managing load balancer traffic.
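That multi-step pattern can be sketched as a rolling loop over the nodes; `lb` and `deploy_to` are hypothetical placeholders standing in for your load balancer API calls and deployment tooling:

```shell
#!/bin/sh
# Sketch of a rolling deployment over load-balanced web servers.
# `lb` and `deploy_to` are hypothetical helpers; replace their bodies
# with calls to your real load balancer API and deployment scripts.
set -e

SERVERS="web1 web2 web3"

lb()        { echo "lb $*"; }        # placeholder: call your LB API here
deploy_to() { echo "deploy $1"; }    # placeholder: stop app, copy files, start app

for server in $SERVERS; do
    lb drain "$server"       # stop sending new traffic to the node
    deploy_to "$server"      # update the node while it is out of rotation
    lb enable "$server"      # put the node back behind the balancer
done
```

Because only one node is out of rotation at a time, the remaining servers keep serving traffic and end users see no downtime.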

CAUTION! Novice engineers often forget to set up an automated rollback script. Usually, the rollback script is triggered manually, referencing the last known stable release version. In order to have a reliable “go back” function, you need to prepare both “go up” and “go down” database migration scripts during feature development. It takes a bit more time during development, but believe me, in an emergency you’ll be more than happy to have it.
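As a sketch, the manual rollback trigger only needs to locate the previous (last known stable) package in the repository listing and redeploy it; the package names below are made up:

```shell
#!/bin/sh
# Sketch of a manually triggered rollback: find the previous (last known
# stable) delivery package and redeploy it. The package list and the
# redeploy/migration commands are placeholders.
set -e

# Packages listed newest first, e.g. from the S3 bucket or network share:
PACKAGES="myapp-1.4.2-201312101200-a1b2c3d.zip
myapp-1.4.1-201312051034-9f8e7d6.zip"

previous_stable=$(echo "$PACKAGES" | sed -n '2p')
echo "Rolling back to ${previous_stable}"
# Then run the prepared "go down" database migrations for the bad release
# and redeploy the stable package with the same automated deployment process.
```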

Summary

Setting up a Continuous Integration, Continuous Delivery and Continuous Deployment environment adds a certain overhead to the project, but the benefits far outweigh any inconveniences. Collaboration among software engineers becomes simpler, with less time spent on environment and database synchronization. Testing and deployment drops in complexity, and last-minute surprise issues and burning fixes practically disappear.

This blog post only briefly noted the additional step of releasing distributed applications in the Continuous Deployment practice. This is a very broad topic, which will be reviewed in future blog posts.