When creating Django applications, or using cookiecutters such as Cookiecutter Django, you will by default have a number of dependencies that need to be provisioned: an S3 bucket, a Postgres database and a Mailgun domain. On top of this you'll most likely want to add basic monitoring, a DNS service, Google Search Console, Google Analytics, etc.

We'll go through all of these services and set up all the resources declaratively (infrastructure as code) with Terraform. This will then be easily repeatable for any new Django application that needs to be deployed. Every service uses a free plan, so this whole setup costs $0 to run monthly.

DNS with Cloudflare

Cloudflare provides CDN, DNS and various other services for free, and it is my go-to service when I need DNS. Adding a Cloudflare record with Terraform is quite simple. Add the necessary API keys for the provider as environment variables:
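The exact variable names depend on your Cloudflare provider version; for the 1.x provider used here they are the account email and global API key (newer provider versions read CLOUDFLARE_API_TOKEN instead). The values below are placeholders:

```shell
# Credentials for the Cloudflare Terraform provider (1.x-era variable names).
export CLOUDFLARE_EMAIL="you@example.com"
export CLOUDFLARE_API_KEY="my-api-key"
```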

You could instead define them as variables in the provider block, but I'd avoid that to keep sensitive secrets out of Git.

Now you can add the provider:

provider "cloudflare" {}

And lastly add the Cloudflare record resource so your application can be accessed by its domain name:

resource "cloudflare_record" "hodovi_cc" {
  domain  = "hodovi.cc"
  name    = "hodovi.cc"
  value   = "your-ip"
  type    = "A"
  ttl     = 1
  proxied = true
}

Cloudflare will now serve as a DNS and a CDN, caching your static content. I'd recommend adding a few default page rules: Flexible SSL (if you have not set up full SSL with Let's Encrypt), HTML and CSS minification, email obfuscation (hiding any email addresses from bots and scrapers) and automatic redirection from HTTP to HTTPS, ensuring that every connection uses HTTPS. All of these can easily be set up with Terraform. We'll split the Always Use HTTPS rule from the other rules since it targets a different URL (the non-HTTPS URL).

Always use HTTPS

The required Terraform resource:

resource "cloudflare_page_rule" "hodovi_cc_always_use_https" {
  zone   = "hodovi.cc"
  target = "http://hodovi.cc/*"
  status = "active"

  actions {
    always_use_https = true
  }
}

All other rules

The required Terraform resource:

resource "cloudflare_page_rule" "hodovi_cc_default_rules" {
  zone   = "hodovi.cc"
  target = "https://hodovi.cc/*"
  status = "active"

  actions {
    ssl               = "flexible"
    email_obfuscation = "on"
    always_online     = "on"
  }
}

Cloudflare only allows three page rules on the free plan, and each rule applies to one target (URL), so these rules use up two of your three free rules.

Postgres DB

For the Postgres database we'll use AWS's managed database service, RDS. AWS offers up to 750 hours of free t2.micro instances with 20 GB of storage per month; if you run a single instance, 750 hours covers a whole month of uptime. A t2.micro instance should be fine for Django applications that are not data heavy, and this setup includes daily backups as well. Note that this setup is a bit more complex, requiring a VPC for the RDS instance. We use a Terraform module for the RDS instance; to grasp all the parameters you'll have to look into the module, as the topic is too big to cover in this post.

Note: this setup creates the database in a public subnet and makes it publicly accessible. I run my Django applications within a GKE cluster, so they do not share a VPC with the database and the database needs to be publicly accessible. There is also no failover: it is a single instance in a single availability zone. If you'd like instances in multiple availability zones, that should be achievable with this setup too.

module "rds_vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "rds"
  cidr = "10.0.0.0/16"

  azs             = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_dns_hostnames = true

  tags = {
    Terraform   = "true"
    Environment = "${var.environment}"
  }
}

resource "aws_security_group" "rds" {
  name        = "allow_rds"
  description = "Allow RDS inbound traffic"
  vpc_id      = "${module.rds_vpc.vpc_id}"

  ingress {
    from_port   = 5432
    to_port     = 5432
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name      = "Allow access to the RDS instance"
    Terraform = true
  }
}

module "hodovi_rds" {
  source  = "terraform-aws-modules/rds/aws"
  version = "~> 2.0"

  identifier = "hodovidb"

  parameter_group_name      = "default.postgres11"
  create_db_parameter_group = false
  create_db_option_group    = false

  subnet_ids = "${module.rds_vpc.public_subnets}"

  engine            = "postgres"
  engine_version    = "11.4"
  instance_class    = "db.t2.micro"
  allocated_storage = 20

  name     = "hodovidb"
  username = "postgres"
  password = "${var.hodovi_rds_password}" # Set in terraform secret vars
  port     = "5432"

  vpc_security_group_ids = ["${aws_security_group.rds.id}"]

  maintenance_window = "Mon:00:00-Mon:03:00"
  backup_window      = "03:00-06:00"

  # Publicly accessible
  publicly_accessible = true

  tags = {
    Owner       = "hodovi"
    Environment = "${var.environment}"
    Terraform   = true
  }

  # Database Deletion Protection
  deletion_protection = true
}

Django storages S3 bucket

A common way to handle static and media files is to store them in an S3 bucket. Cookiecutter Django offers both an S3 bucket and a Google Cloud Storage solution. As I'm not that familiar with GCP's offering, I'll go through setting up an S3 bucket.

Let's first add the AWS Terraform provider:

provider "aws" {
  region = "your-aws-region"
}

Add your API credentials.

export AWS_ACCESS_KEY_ID="my-access-key"
export AWS_SECRET_ACCESS_KEY="my-secret-key"

IAM user

First we'll create an IAM user that will be used to access the static files. We'll call it deployer, as I use its credentials to access the AWS API and store the static files in the S3 bucket.

resource "aws_iam_user" "deployer" {
  name = "Deployer"

  tags = {
    description = "Deploy and access AWS resources"
  }
}

AWS IAM Policy Document

Set up an IAM policy document that gives full access to the user defined above. We'll also give anyone read access to the static files.

data "aws_iam_policy_document" "hodovi_cc" {
  statement {
    sid = "PublicReadForGetBucketObjects"

    actions = [
      "s3:GetObject",
    ]

    resources = [
      "arn:aws:s3:::hodovi.cc/*",
    ]

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }
  }

  statement {
    actions = [
      "s3:*",
    ]

    resources = [
      "arn:aws:s3:::hodovi.cc",
      "arn:aws:s3:::hodovi.cc/*",
    ]

    principals {
      type        = "AWS"
      identifiers = [aws_iam_user.deployer.arn]
    }
  }
}

S3 bucket

Create the S3 bucket and refer to the policy created before. Attach any tags you'd like.

resource "aws_s3_bucket" "hodovi_cc" {
  bucket = "hodovi.cc"
  policy = "${data.aws_iam_policy_document.hodovi_cc.json}"

  tags = {
    Name        = "hodovi.cc"
    Environment = "${var.environment}"
  }
}

CORS

If you use e.g. Wagtail and need cross-origin access to your S3 bucket, just add a cors_rule section.

resource "aws_s3_bucket" "hodovi_cc" {
  bucket = "hodovi.cc"
  policy = "${data.aws_iam_policy_document.hodovi_cc.json}"

  tags = {
    Name        = "hodovi.cc"
    Environment = "${var.environment}"
  }

  cors_rule {
    allowed_headers = ["Authorization"]
    allowed_methods = ["GET"]
    allowed_origins = ["https://hodovi.cc"]
    max_age_seconds = 3000
  }
}

Mailgun

Cookiecutter Django uses Anymail with Mailgun's email service, which is an easy way to add a "contact me" feature to your application. It is free for up to 10,000 emails a month. I've created a Terraform module that easily sets this up for you.

First we'll have to add a Mailgun provider. Download the terraform-provider-mailgunv3 binary.

Then update your ~/.terraformrc to refer to the binary:

providers {
  mailgunv3 = "/home/example/downloads/terraform-provider-mailgunv3"
}

Add the provider to your provider.tf. You could add your API key here, but I use environment variables to avoid committing sensitive keys to Git. Export your API key:

export MAILGUN_API_KEY='my-api-key'

If you're from Europe you can use Mailgun's EU API; if not, just omit the base_url field.

provider "mailgunv3" {
  base_url = "https://api.eu.mailgun.net/v3"
}

Now you can use the module I've created.

module "hodovi_cc_mailgun_cloudflare" {
  source          = "github.com/adinhodovic/terraform-mailgun-cloudflare"
  smtp_password   = var.hodovi_mailgun_password # Set in tf secret vars
  cloudflare_zone = "hodovi.cc"
  mailgun_domain  = "mg.hodovi.cc"
  spam_action     = "tag"
}

Monitoring Server Uptime with Statuscake

You'll most likely want to monitor application uptime, and there are several service providers for that. For this setup we'll use Statuscake, as its free plan should work fine for your Django application.

The free plan provides uptime checks every 5 minutes; for a more frequent check rate you'll have to look into the paid features.

As usual, add the provider to your providers.tf:

provider "statuscake" {
  username = "my-user-name"
}

Export your API key:

export STATUSCAKE_APIKEY="my-api-key"

Add your Statuscake test:

# Contact group 158719 = DevOps Team
resource "statuscake_test" "https_monitoring_hodovi_cc" {
  check_rate    = 300
  contact_group = ["my-contact-group-id"]
  test_type     = "HTTP"
  user_agent    = "Status Cake"
  trigger_rate  = "0"
  confirmations = "2"
  website_name  = "hodovi.cc"
  website_url   = "https://hodovi.cc"
}

You'll have to add a contact group manually, as Terraform does not provide that resource, and then replace the contact group ID above with the one you created.

Slack Alerting Integration

Create a Slack app, add an incoming webhook to the channel of your choice. Now you can go to Statuscake and add an integration of the type Slack. Give it a name and add the webhook url. Now you can tie your contact group with the Slack integration. You'll now receive downtime alerts in Slack.

Google Search Console

Next we'll add your domain to Google Search Console, which helps you optimize your website's visibility and monitor search result data for your website. Go to Google Search Console and choose to add your domain. You'll get a site verification code that needs to be added as a TXT record.

I've created a Terraform module that simply adds the site verification as a TXT record at your DNS provider. The example below uses Cloudflare as the DNS service; the module supports AWS Route 53 as well. If you don't use Terraform, you can set this up manually through your DNS service.

The module uses the Cloudflare provider to add a TXT record:

module "hodovi_cc_search_console" {
  source = "github.com/adinhodovic/terraform-txt-record"

  cloudflare_zones = [
    {
      "zone"  : "hodovi.cc",
      "value" : "google-site-verification=my-site-verification"
    }
  ]
}

Google Analytics

We'll use django-analytical to add Google Analytics tracking. Add django-analytical to your requirements, install it, and add "analytical" to your installed applications.

Note: this is just plain Django code.

Add the variable

GOOGLE_ANALYTICS_PROPERTY_ID = env("GOOGLE_ANALYTICS_PROPERTY_ID")

to your settings. When you deploy, supply the property ID as an environment variable. Lastly, add the required template tags to your base template:

{% load analytical %}
<!DOCTYPE html>
<html lang="en">
  <head>
    {% analytical_head_top %}
    ... your content
    {% analytical_head_bottom %}
  </head>
  <body>
    {% analytical_body_top %}
    ... your content
    {% analytical_body_bottom %}
  </body>
</html>
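Cookiecutter Django's env() helper comes from django-environ; if you're not using it, the same setting can be read with plain os.environ. A minimal sketch, where the fallback default is a hypothetical placeholder:

```python
import os

# Read the Google Analytics property ID from the environment,
# mirroring env("GOOGLE_ANALYTICS_PROPERTY_ID") in the settings above.
# "UA-XXXXXXX-1" is a hypothetical placeholder default.
GOOGLE_ANALYTICS_PROPERTY_ID = os.environ.get(
    "GOOGLE_ANALYTICS_PROPERTY_ID", "UA-XXXXXXX-1"
)
```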

Secrets

The Mailgun and RDS secrets are not stored in code; you can store them in a secret.tfvars file.

hodovi_rds_password     = "my-password"
hodovi_mailgun_password = "my-password"

Now when using

terraform apply

you'll specify the secret vars file:

tfaa -var-file=secret.tfvars

tfaa is an alias for terraform apply -auto-approve, taken from terraform-alias.
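If you don't want to pull in the whole terraform-alias collection, the one alias used here can be defined directly in your shell profile:

```shell
# tfaa: shorthand for terraform apply -auto-approve
alias tfaa='terraform apply -auto-approve'
```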

Outputs

Lastly, we'll create a Terraform outputs file to gather the endpoints of the resources we've created.

output "hodovi_rds_endpoint" {
  value = module.hodovi_rds.this_db_instance_endpoint
}

output "hodovi_s3_website_endpoint" {
  value = aws_s3_bucket.hodovi_cc.id
}
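As a side note, the RDS endpoint output comes back as host:port. If your Django settings build the database connection from a DATABASE_URL, you can assemble one from that output. A sketch; the helper name is mine, the endpoint value is a made-up example, and the user, password and database name mirror the Terraform above:

```python
def database_url_from_endpoint(endpoint, user, password, db_name):
    """Build a postgres:// DATABASE_URL from an RDS endpoint ("host:port")."""
    host, port = endpoint.rsplit(":", 1)
    return f"postgres://{user}:{password}@{host}:{port}/{db_name}"

# Hypothetical endpoint value, shaped like the hodovi_rds_endpoint output:
url = database_url_from_endpoint(
    "hodovidb.abc123.eu-west-1.rds.amazonaws.com:5432",
    "postgres",
    "my-password",
    "hodovidb",
)
# url == "postgres://postgres:my-password@hodovidb.abc123.eu-west-1.rds.amazonaws.com:5432/hodovidb"
```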

Summary

By now you should have infrastructure as code for the following:

- Cloudflare
  - CDN
  - DNS
  - Flexible SSL
  - Always use HTTPS
- AWS S3 bucket for static storage
- AWS RDS database
- Uptime health checks with Statuscake
  - Slack alerts
  - Email alerts
- Mailgun domain
- Google Analytics
- Google Search Console

The monthly cost of all these services should be $0 unless you exceed the thresholds for each service. This is a basic setup for each service; obviously, if your application has high demands you'll have to scale them.

We could also set up an EC2 instance, which is free as well (the lowest tier offers up to 750 hours a month), but I run a Kubernetes cluster where all of my applications are deployed, and I'd assume most readers already have this part figured out with various cloud providers and setups.