In this post we will dissect a cloud-based service, focusing on the core elements of a continuous integration and delivery (CI/CD) pipeline, the ecosystem that revolves around it, and the benefits it brings.

The architecture below covers the major components of a cloud-based service from a DevOps perspective. The components complement each other in multiple ways; this synergy removes friction from the process and from within teams, and enforces good software engineering principles.

I have attempted to break the architecture down to help you understand the big picture. Technology choices are subjective and can change depending on factors such as cost, team familiarity, and legacy constraints.

However, if you are starting from scratch, or somewhere in the middle of automating it all, this architecture can be replicated as a solid starting point that will give you great mileage.

Why this stack?

Choosing a technology stack is a tricky business for many reasons; there are always trade-offs to evaluate. Let me list a few non-functional requirements that this stack satisfies.

Agility

An automated CI/CD pipeline can dramatically improve the rate of deployments and time to market. From source-control webhooks to Jenkins CI pipelines, and from Packer AMI creation to Spinnaker deployment pipelines, there is strong synergy between these components. A lot of the integration comes out of the box through plugins, which makes your life as a DevOps engineer easier, with plenty of documentation, examples, and community support.
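To make this concrete, here is a minimal Jenkinsfile sketch for that flow: a webhook triggers the build, and the pipeline bakes an AMI with Packer once tests pass. The trigger assumes the GitHub plugin; the build command, template path, and service name are all placeholder assumptions.

```groovy
// Hypothetical Jenkinsfile: webhook-triggered build, then a Packer bake.
pipeline {
    agent any
    triggers {
        githubPush()  // fires when the configured SCM webhook reports a push
    }
    stages {
        stage('Build & Test') {
            steps {
                sh './gradlew build test'  // placeholder build command
            }
        }
        stage('Bake AMI') {
            steps {
                // Template path and variable name are assumptions
                sh 'packer build -var "version=${BUILD_NUMBER}" packer/service.pkr.hcl'
            }
        }
    }
}
```

From here, Spinnaker can pick up the finished Jenkins build (via its Igor integration) and take over the deployment.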

Manual deployments can take a toll on your DevOps team and can quickly become a bottleneck. Automating deployments with Spinnaker can dramatically increase deployment frequency, freeing DevOps engineers for further automation, or for contributing to product development itself.

Flexibility

Spinnaker provides a range of deployment strategies that can be chosen based on the size of the cluster, the nature of the application, and the nature of the deployment itself (A/B testing, for example).
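A strategy is just a property of the deploy stage's cluster configuration in the pipeline JSON. The sketch below shows the rough shape for a red/black (blue/green) deployment; the application and account names are assumptions.

```json
{
  "type": "deploy",
  "name": "Deploy to production",
  "clusters": [
    {
      "application": "myservice",
      "account": "prod-aws",
      "strategy": "redblack",
      "maxRemainingAsgs": 2
    }
  ]
}
```

With `redblack`, Spinnaker stands up the new server group next to the old one, shifts traffic, and keeps the previous group around (here, up to two) for a fast rollback.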

Spinnaker deploying to auto-scaling groups lets you attach auto-scaling policies, making your infrastructure elastic. You even get the option of deploying to Kubernetes.

Automation

Jenkins's native integration with version control and Spinnaker makes it really easy to automate pipelines. Using Ansible to provision AMIs allows you to reuse roles, making the provisioning of new services simple and quick.
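This is where Packer and Ansible meet: Packer launches a throwaway instance from a base image, runs your Ansible playbook against it, and snapshots the result as an AMI. A minimal HCL sketch, with placeholder region, base AMI, and playbook path:

```hcl
# Hypothetical Packer template: bake an AMI provisioned by Ansible.
source "amazon-ebs" "service" {
  region        = "us-east-1"
  source_ami    = "ami-0123456789abcdef0" # placeholder base AMI
  instance_type = "t3.micro"
  ssh_username  = "ubuntu"
  ami_name      = "myservice-{{timestamp}}"
}

build {
  sources = ["source.amazon-ebs.service"]

  provisioner "ansible" {
    # Reuses the same playbook/roles you would run anywhere else
    playbook_file = "./ansible/service.yml"
  }
}
```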

System monitoring and telemetry are key ingredients for providing a stable service and meeting service-level objectives. This telemetry has an added advantage: by integrating Spinnaker with Datadog, you can automate your deployment decisions. Key metrics can be defined for canary deployments, and Spinnaker can perform full rollouts or rollbacks depending on how those metrics behave.
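Spinnaker's canary analysis (Kayenta) works from a canary configuration that lists the metrics to judge. The fragment below is only a sketch of the general shape, with an assumed Datadog query and metric names; consult the Kayenta/Datadog integration docs for the exact schema.

```json
{
  "name": "myservice-canary",
  "metrics": [
    {
      "name": "error_rate",
      "groups": ["Errors"],
      "query": {
        "type": "datadog",
        "metricName": "avg:myservice.errors{*}"
      }
    }
  ]
}
```

During a canary run, the judge compares these metrics between the baseline and canary server groups and scores the deployment, which the pipeline then uses to continue or roll back.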

Ansible provides a solid framework for automating provisioning. There are plenty of Ansible roles at your disposal, making provisioning a breeze. You can capture commonly used tasks in a role, which can then be reused across multiple services.
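In practice, a new service's playbook is often just a short list of shared roles plus a few variables. The role names and variables below are assumptions for illustration:

```yaml
# Hypothetical playbook: shared roles do the heavy lifting,
# so provisioning a new service is a few lines of configuration.
- hosts: all
  become: true
  roles:
    - role: common          # e.g. hardening, users, monitoring agent
    - role: java_runtime    # shared runtime role reused across services
    - role: myservice
      vars:
        service_port: 8080
        service_version: "{{ build_version | default('latest') }}"
```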

Consistency

Using Terraform as an infrastructure-as-code tool gives you consistent infrastructure. It lets you keep infrastructure under version control with a release process, for easier tracking and rollbacks.
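A small sketch of what that looks like: the state lives in a shared backend, and the resources (here, the auto-scaling group Spinnaker deploys into) are declared in code that goes through the normal review and release process. Bucket, variable names, and sizes are assumptions.

```hcl
# Hypothetical Terraform sketch: infrastructure declared as code.
terraform {
  backend "s3" {
    bucket = "myteam-terraform-state" # assumed shared state bucket
    key    = "myservice/terraform.tfstate"
    region = "us-east-1"
  }
}

variable "private_subnet_ids" { type = list(string) }
variable "launch_template_id" { type = string }

resource "aws_autoscaling_group" "service" {
  name                = "myservice-asg"
  min_size            = 2
  max_size            = 10
  vpc_zone_identifier = var.private_subnet_ids

  launch_template {
    id      = var.launch_template_id # points at the Packer-baked AMI
    version = "$Latest"
  }
}
```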

Release cycle for everything

With everything in version control, creating a release process is relatively straightforward. You get all the goodies: release tracking, easier rollbacks, code reviews, and enforced documentation through release notes.

The Agile manifesto recommends removing bottlenecks and dependencies on individual teams and people. With everything in version control there are no central dependencies or bottlenecks, which helps break silos. Cross-team collaboration through pull requests becomes possible.

Immutability

Deploying AMIs pre-baked by Packer gives you immutable infrastructure in production. With no in-place upgrades, there is little risk of configuration drift, which adds to the consistency and reliability of your service.

Reusability

Don't Repeat Yourself (DRY) is a basic principle of software engineering. This stack lets you build common modules that can be reused across different microservices, so you can automate and provision new services quickly and easily.

Ansible Galaxy provides a vast range of roles at your disposal, which only need to be included and configured for your environment. For example, installing and configuring NGINX can be a tedious process, but you can use a role maintained by the NGINX team to simplify it.
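For instance, after installing NGINX's published role with `ansible-galaxy install nginxinc.nginx`, using it is a few lines of playbook (the host group here is an assumption):

```yaml
# Include the Galaxy-maintained NGINX role instead of
# hand-writing install and configuration tasks.
- hosts: webservers
  become: true
  roles:
    - role: nginxinc.nginx
```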

Similarly, you can find plenty of infrastructure modules in the Terraform Registry. They can be imported into your project, giving you quite a complicated architecture with very little configuration. Doing the same in an on-premise environment could take months.
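As an example, the community VPC module from the Registry builds a full multi-AZ network from a handful of inputs; the name and CIDR ranges below are placeholders:

```hcl
# Pulls the community VPC module from the Terraform Registry.
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name            = "myservice-vpc"
  cidr            = "10.0.0.0/16"
  azs             = ["us-east-1a", "us-east-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]
}
```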

How it works

From code to production (CI/CD)

For a long time I worked with what could be called a traditional on-premise setup. Working at Telestax, I got the chance to see the internals of how a cloud service works.

The idea of this post was to share the essence of my journey from an on-premise mindset to cloud infrastructure. Much of the paradigm shift came from the concepts of automation, agility, flexibility, and delegating responsibility for uptime and scalability to a cloud provider. For example, immutability is a function of how easily you can get an instance ready; it wouldn't work in an environment where you need a week to provision a VM.

You may be familiar with some or all of these concepts, or may even be working with more advanced technology stacks and architectures. But this is the essence of my journey from a fixed on-prem mindset to the more evolutionary path of a cloud service.

A whole book could be written on this topic, but I hope this short synopsis helped you learn something useful. Thanks for reading; as always, comments are welcome.