I’ve said it before and I will repeat myself on this because it’s an important concept:

DevOps is about culture and communication, not tools

Now, that said, to implement the automation required in DevOps, you're going to have to get into some degree of tooling. There is a whole slew of possible tools to support you: Jenkins, TeamCity, Octopus Deploy, and more.

All these tools offer excellent solutions, each with its own methodologies and limitations. You'll need to explore them to understand which ones are best for you and your processes.

I’ve been doing a lot of work lately in another tool, Azure DevOps. Let me show you a little of what I’ve done.

Azure DevOps Pipelines

I don't mean for this to be a complete tutorial on setting up Azure DevOps (see the bottom of this post for my all-day, in-person teaching sessions, where I will cover that in detail). I just want to discuss a set of pipelines I've built, and why I built them that way, to illustrate how you can begin automating your processes in support of a DevOps implementation.

The most important concepts when we get to building and deploying within Azure DevOps are Builds and Releases. Yes, we can get into all the fun of talking about Deployment Groups (sets of servers for simultaneous deployment) and the rest, however, we first have to get a successful build, and then define how that build will get deployed/released; hence, Builds and Releases.

Azure DevOps Builds

My system currently has two different builds for my HamShackRadio database (just made up):

Each of these builds is pulling code from source control. Starting with the bottom one, it's meant as a Continuous Integration (CI) build. When code gets pushed to the master branch, a CI build fires (yeah, other triggers could be used, pull requests, what have you; this is the start of the choices we have). The top one is meant to build my code for my pre-production, or staging, environment in preparation for a production deployment.
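My builds are defined through the classic UI, but the same push trigger can be expressed as pipeline code. Here's a minimal sketch of what that would look like in Azure Pipelines YAML (the pool image is an assumption, not part of my actual setup):

```yaml
# Hypothetical YAML equivalent of the CI build trigger:
# fire a build whenever code is pushed to master.
trigger:
  branches:
    include:
      - master

pool:
  vmImage: 'windows-latest'
```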

The steps that each build does matters. Here are the steps of the CI build as currently configured:

My source is a Visual Studio project stored in Git source control in Azure DevOps. I pull from there, on the master branch as you see above, triggered, again, by the code getting pushed to master. Yeah, I'm using Redgate tools for this, but the thought processes for any other tool set would be the same. I'm running a set of tSQLt tests from SQLCop to validate the quality of the code. Then I'm creating a clone database (small, fast) for the CI test based on a cleaned copy of the production database. From that, I build my deployment: a NuGet package.
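In YAML form, the shape of that CI build would look roughly like this. The script bodies are placeholders only; the real pipeline uses Redgate build tasks, and the step names here are my own shorthand:

```yaml
# Sketch of the CI build steps described above. Script bodies are
# placeholders standing in for the Redgate tasks actually used.
steps:
  - checkout: self          # pull the Visual Studio project from Git

  - script: echo "Run the tSQLt/SQLCop tests against the project"
    displayName: 'Validate code with tSQLt tests'

  - script: echo "Create a clone from a cleaned copy of production"
    displayName: 'Provision CI database (clone)'

  - script: echo "Build the deployment into a NuGet package"
    displayName: 'Build NuGet package artifact'

  - publish: $(Build.ArtifactStagingDirectory)
    artifact: HamShackRadio-CI
```

If the test step fails, every step after it is skipped and the build is marked failed, which is exactly the behavior you want from CI.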

The choices made here are all based on the needs of CI. First, and most importantly, everything comes from source control. Next, I run tests to validate the code prior to attempting builds. Yes, the dev teams should have done this on their own, but DevOps and automation give us the ability to add additional testing as insurance. If the tests fail, the build fails. Here's what a successful set of tests looks like:

If the tests pass, then the build runs and I get T-SQL in a NuGet package, ready for the Release to the CI database (created at the Clone step).

That’s just one set of methods, directly in support of CI. If you were to look at the other build, it does a completely different set of steps because my needs for deployments in the case of pre-production are different. You’ll need to step through the same kinds of thought processes to determine what’s needed within your environment.

Let’s talk about the releases.

Azure DevOps Releases

I have quite a few more releases than I do builds:

From the top, my Pre-production release deploys the pre-production build you saw earlier and does some automated tests. This is done against a copy of the production database (cloned copy, I love throwing Redgate tools at this stuff). This release only occurs after a successful pre-prod build, but it’s automated from that.

Next is my Production release. It uses the same pre-production build artifact that I used in the previous release. I don’t rebuild a deployment unless I test it first.

Next, I have a scheduled QA deployment. It runs once a day and uses the last successful CI build and release. It also runs a whole bunch of specialty tests after the release.
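A once-a-day deployment like that maps to a schedule trigger. Here's a hedged sketch in Azure Pipelines YAML; the cron time and branch are illustrative, not my actual settings:

```yaml
# Hypothetical schedule trigger for a once-a-day QA deployment.
schedules:
  - cron: "0 6 * * *"        # every day at 06:00 UTC
    displayName: Daily QA deployment
    branches:
      include:
        - master
    always: false            # skip the run if nothing has changed
```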

Finally, I have my CI release, which also runs additional CI tests to ensure things are ready for going to QA.

Just so you can see the details of one of these processes, here’s the pre-production release:

You can see that I'm sourcing, not from source control, but from the build artifacts: the NuGet package from the HamShackRadio-CI build. Then, two steps are executed:

You can see that we’re using Clone again from the clean copy of the Production database. Then we’re doing the deployment.
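Sketched as YAML, the two release steps look something like the following. Again, the script bodies are placeholders for the Redgate clone and deploy tasks:

```yaml
# Sketch of the pre-production release: pull the artifact,
# clone the database, then deploy. Script bodies are placeholders.
steps:
  - download: current        # fetch the NuGet package from the CI build
    artifact: HamShackRadio-CI

  - script: echo "Create a clone from the clean copy of Production"
    displayName: 'Clone production database'

  - script: echo "Deploy the NuGet package to the cloned database"
    displayName: 'Deploy to pre-production'
```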

The goals are simple. Release to different environments, on different schedules, with different triggering mechanisms and different success criteria. All this in support of automating your database deployment and testing.

Conclusion

If this is your first time looking at automating deployments, all this can seem daunting, a ton of different moving parts (I haven’t talked about setting up agents, integrating with kanban, etc.). However, it’s all based on decisions you’ll make about your needs. I need a fast, immediate set of tests as developers work. Cool, let’s implement a Continuous Integration build and release. Add tests to that so that we reduce the QA workload to only the important stuff. Done.

Continue from there, defining your needs for each environment and then defining the builds and releases around those environments. Don't be surprised when you spend a lot of time reworking these steps. You'll find that your process changes as you automate it. "Wait, we can have it run these checks on every deployment? Add that step now."

The real key is working within the toolset you have in order to accomplish the automation your process needs. Your process is going to be different from mine. However, Azure DevOps Pipelines is extremely flexible and will support just about anything you can think up. Heck, if nothing else, throw a PowerShell script step in wherever you need one.
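That escape hatch is genuinely that simple. In YAML, a PowerShell step can be dropped in anywhere a built-in task doesn't quite fit (the script body here is just a stand-in):

```yaml
# An ad hoc PowerShell step for anything the built-in tasks don't cover.
steps:
  - powershell: |
      Write-Host "Custom logic here: smoke tests, notifications, whatever."
    displayName: 'Ad hoc PowerShell step'
```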

Want to know a WHOLE lot more about how to do this type of thing? Then I’ve got some opportunities for you:

SQLSaturday Indianapolis Precon, Friday August 16th, 2019. Click here now to register.

SQLSaturday Oslo Precon, Friday August 30th, 2019. Click here to register.
