Open the S3 web console and select your root domain S3 bucket. Click Properties and scroll to Static Website Hosting. Select "Enable website hosting" and enter your index and error documents. For me these were index.html and 404.html. Take note of the Endpoint URL listed here; you'll need it later.
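If you prefer the command line, the same configuration can be applied with the AWS CLI (a sketch; assumes the CLI is installed and configured, and `your-domain.com` stands in for your bucket name):

```shell
# Enable static website hosting on the bucket, mirroring the console steps above.
# Assumes the AWS CLI has credentials that can modify this bucket.
aws s3 website s3://your-domain.com \
  --index-document index.html \
  --error-document 404.html

# Print the website configuration to confirm it took effect.
aws s3api get-bucket-website --bucket your-domain.com
```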

What if someone's squatting on my domain name bucket? I found myself in this predicament. Someone was squatting on the paulstamatiou.com S3 bucket. I asked on the AWS forums and they replied stating that the buckets don't need to be exactly the same; it's just to make sure you don't get confused during the setup process.

The next step was to enable S3 static website hosting and get my Jekyll site running exactly as I wanted it before adding my domain. I had to create buckets to store my site's static files. Amazon suggests you create two: one for the no-www and one for the www version of your domain. For example, www.paulstamatiou.com and paulstamatiou.com. But since we will be using CloudFront from the get-go, only one bucket is required (your root domain bucket). More on that later.
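Creating that single root domain bucket is one command with the AWS CLI (a sketch; the bucket name and region are placeholders for your own):

```shell
# Create the root domain bucket. The name must match the domain
# you plan to serve (placeholder shown here).
aws s3 mb s3://your-domain.com --region us-east-1
```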

That only took a few minutes through the Route 53 web console. At this point I had Route 53 working with my old host Heroku.

I exported my DNS records from Zerigo,2 imported them into Route 53, waited a while for the new records to become active for good measure, then pointed my registrar Namecheap to the new Route 53 nameservers.

When it comes to moving domains and hosts, the name of the game is minimizing downtime and DNS propagation. It's vital that you keep both sites running so that folks with old DNS records can still load a version of your site until the propagation completes.

I've known about S3 site hosting since it launched but was always wary about how it would perform when I wanted to update existing files. Previously I thought that CloudFront only updated files every 24 hours. That concern became a non-issue when I discovered a great Ruby gem that calls CloudFront's invalidation API when pushing S3 sites.

There were other options available but moving over to Amazon S3 site hosting with CloudFront was the most intriguing:

I felt uneasy about this and wanted to make sure my site would just work at all times. I didn't want to leave myself with future technical debt due to a hacky solution.

I was able to successfully get my DNS running on Route 53 and pointing to my Heroku setup rather quickly. Though everything was working, I realized I was running an unsupported configuration. Namely that I was using DNS A records to point to the root of my domain. Heroku explains the problem with this zone apex approach on cloud hosting providers:

I decided to put my current dev work — designing and building new photoblog functionality to showcase my Japan trip photos — on hold to move away from Zerigo. I chose to switch to AWS Route 53 .

A few weeks ago my DNS provider Zerigo sent an email stating that due to recent infrastructure upgrades they would need to raise their prices. For my meager DNS needs that ended up being a huge price hike: from $39 per year to $25 per month.1 Prices were set to take effect a month later.

Now you'll have to make this bucket public (unless you enjoy editing object ACLs for every file you upload). This can be done with a simple bucket policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR-ROOT-BUCKET-HERE.com/*"
    }
  ]
}

Copy that policy, adding your bucket name where indicated. On the same S3 web management console page, scroll up to Permissions and expand it. Click on "Add bucket policy" and paste the policy. Now any files you upload to this bucket will instantly be web accessible via the Endpoint URL provided.
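The same policy can also be attached from the command line (a sketch; assumes you saved the policy above locally as policy.json):

```shell
# Attach the public-read bucket policy without using the web console.
# policy.json is the JSON document shown above, with your bucket name filled in.
aws s3api put-bucket-policy \
  --bucket YOUR-ROOT-BUCKET-HERE.com \
  --policy file://policy.json
```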

As mentioned earlier, we are only enabling site hosting and adding a public policy to the root domain bucket since a www bucket is not required when creating a CloudFront-backed S3 site.

Your first S3 site push

The next step is to get your static site running as you like before adding a custom domain. I'm going to speak in terms of my Jekyll setup but it should apply to other static site generators too.

I installed the s3_website gem. It looks for the default Jekyll _site folder to publish to S3. First run s3_website cfg create in the root of your Jekyll directory to create the s3_website.yml configuration file.
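Those two setup steps look like this in the terminal (a sketch; assumes a working Ruby environment):

```shell
# Install the gem, then generate the s3_website.yml configuration file
# in the root of your Jekyll directory.
gem install s3_website
s3_website cfg create
```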

You will need to put your AWS credentials into that YAML file. To be safe, I recommend you create IAM AWS credentials and supply those instead. IAM stands for Identity and Access Management; basically you are creating a new user that does not have 'root' access. In the event that your IAM keys are ever compromised, the attacker would only be able to mess with your bucket, not your entire AWS account. And since you likely already have your static site backed up on GitHub, you'd only need to redeploy your site.

Amazon does a great job going through the specifics of creating IAM credentials so I won't delve into it. The gist of it is that you'll need to create a new group and user. You'll attach that user to a group for which you create a new policy that provides the group with S3 read/write access to only your single bucket where you have site hosting enabled. This group will also need to have CloudFront invalidation access. The new user will have its own access keys that you'll use to fill out s3_website.yml along with the bucket name.3
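As an illustration, a scoped-down policy along these lines would give the group S3 access to just the one bucket plus CloudFront invalidation rights (a hedged sketch, not the exact policy Amazon's guide produces; the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::your-domain-s3-bucket.com"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::your-domain-s3-bucket.com/*"
    },
    {
      "Effect": "Allow",
      "Action": ["cloudfront:CreateInvalidation"],
      "Resource": "*"
    }
  ]
}
```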

s3_id: ABCDEFGHIJKLMNOPQRST
s3_secret: YourPasswordUxeqBbErJBoGWvATxq9TJJTcFAKE
s3_bucket: your-domain-s3-bucket.com

Now you have the basic functionality needed to do your first push. Just run s3_website push. When it finishes, visit the Endpoint URL mentioned earlier to see your live site.

Fixing URLs

My previous blog setup stripped all trailing slashes from my URLs even though Jekyll already generated posts in the "title-name/index.html" structure. With S3 site hosting you'll have to keep trailing slashes (unless you prefer having your URLs end with .html; I don't). URLs without trailing slashes still worked for me, but they incurred an unnecessary redirect to the trailing-slash version. As such, I changed all site navigation and footer links to include the trailing slash and bypass that redirect.

I changed the Jekyll permalink configuration in _config.yml from /:title to /:title/ . Then I made sure to change pages like about.html to about/index.html as well as update my sitemap.xml to have trailing slashes for all posts and pages.
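The relevant _config.yml change is a one-liner (a sketch of my setup; this is the older Jekyll permalink syntax):

```yaml
# _config.yml: generate each post at /:title/index.html
# so URLs can end in a trailing slash.
permalink: /:title/
```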

In addition, I updated my canonical URL tag to include URLs with trailing slashes. The canonical tag helps Google differentiate between duplicate content and tells it which to use as primary.

<link rel="canonical" href="http://yourdomain.com{{ page.url | replace:'index.html','' }}" />

This is mandatory when hosting on S3 with CloudFront since users will be able to access your site from either the root or with www.4

Setting up redirects

At this point I only needed to get my URL redirects in place before I could continue setting up my domain. When I ran my site with WordPress I had a different permalink structure. My post URLs contained the year, month and day in addition to the post slug. I still receive quite a bit of web traffic from users clicking on old links to my posts on other sites. If I didn't take care of this with redirects, users would be presented with a 404 and have to search to find what they were looking for.

I used to have a few regular expressions with the rack-rewrite gem that did everything I needed, and before that, a simple .htaccess configuration. There is no equivalent with S3 site hosting that can accomplish this in a few lines. There are S3 bucket-level routing rules but they're fairly basic. However, S3 does allow 301 redirects on an object-level basis.
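Under the hood this is just the object's website redirect metadata; you could set it manually on a single object with the AWS CLI (an illustrative sketch with placeholder bucket and key names):

```shell
# Create an empty object at the old URL whose only job is to serve a 301
# to the new location when requested through the S3 website endpoint.
aws s3api put-object \
  --bucket YOUR-ROOT-BUCKET-HERE.com \
  --key 2008/04/05/old-post-slug \
  --website-redirect-location /old-post-slug/
```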

I used the s3_website gem to create objects for the old URL structure and specify a "Website Redirect Location" value. All that was necessary was filling out the redirects section of s3_website.yml in this format:

redirects:
  # /year/month/day/post-slug: post-slug/
  2008/04/05/how-to-getting-started-with-amazon-ec2: how-to-getting-started-with-amazon-ec2/

Unfortunately, with over 1,000 posts on my site originally created in the old permalink format, it would have taken hours to fill out the configuration manually. Thanks to Chad Etzel for writing this bash one-liner, which iterates over the files in my Jekyll _posts directory and outputs a list of post redirections ready to paste:

(for file in `ls _posts`; do echo $file | sed s/.markdown//g | awk -F- '{slug=$4; for(i=5;i<=NF;i++){slug=slug"-"$i}; print $1"/"$2"/"$3"/"slug": "slug}'; done) > map.txt
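To see what the one-liner does to a single filename, here is the same pipeline run on one hypothetical post file (the filename is made up for illustration):

```shell
# A Jekyll post filename is year-month-day-slug.markdown. The sed strips
# the extension and the awk rebuilds it as "year/month/day/slug: slug".
echo "2008-04-05-my-example-post.markdown" \
  | sed s/.markdown//g \
  | awk -F- '{slug=$4; for(i=5;i<=NF;i++){slug=slug"-"$i}; print $1"/"$2"/"$3"/"slug": "slug}'
# prints: 2008/04/05/my-example-post: my-example-post
```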

Now all I had to do was another s3_website push to update the site and create all of the empty objects that redirect to the new URLs. A quick test of several pages revealed that everything was redirecting smoothly.

Creating a CloudFront distribution

Go to the CloudFront web console and click Create Distribution. Follow along with the AWS guide for creating a web distribution. Make sure to put your domain name — both with www and without — in the Alternate Domain Names/CNAMEs section.

Take note of the CloudFront URL as well as your Distribution ID. The latter can be found by selecting the distribution then clicking "Distribution Settings." Copy and paste this ID into your s3_website.yml file for cloudfront_distribution_id .

Wait for the distribution to deploy and ensure that your site loads when you visit your CloudFront URL.

Whenever you run s3_website push , the gem will now also tell CloudFront to invalidate its cache of the URLs you just updated (if any).

Note for people with over 1,000 pages: The CloudFront invalidation API has a limit of 1,000 files per invalidation request. You'll see an error like this at the end of an otherwise successful s3_website push :

AWS API call failed. Reason: <?xml version="1.0"?> <ErrorResponse xmlns="http://cloudfront.amazonaws.com/doc/2012-05-05/"><Error><Type>Sender</Type><Code>BatchTooLarge</Code><Message>Your request contains too many invalidations.</Message></Error><RequestId>288b0f29-7c58-82e2-3c21-73785f54b166</RequestId></ErrorResponse> (RuntimeError)

This means that if I update things on my site that affect all pages when generated, not all pages will update immediately when pushed. They won't be served until the CloudFront cache naturally runs its course, which is 24 hours by default. This is called the expiration period and can separately be controlled by adding a Cache-Control header to files when uploading or changing the Object Caching section of your CloudFront distribution and specifying a Min TTL. If you are using the s3_website gem, you can specify object-level cache control with the max_age setting.
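For example, with the s3_website gem you can give long-lived assets a long cache and pages a short one (a sketch; the glob patterns and durations are placeholders for your own layout):

```yaml
# s3_website.yml: per-path Cache-Control max-age values, in seconds.
max_age:
  "assets/*": 31536000   # fingerprinted assets: cache for a year
  "*": 300               # everything else: five minutes
```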

Hooking up your domain to CloudFront

Visit the Route 53 web console and open up the hosted zone for your root domain. You will only be editing two A records: one for the root and one for www (you might have previously added the latter as a CNAME; you can change it now).

Note: This step will start pointing your domain to the new site, away from your old server.

Click on the www A record, set Alias to Yes then place your cursor in the Alias Target field. A dropdown with your new CloudFront distribution should display. It may not load for you until a few minutes after the distribution has finished deploying. It would often hang for me at "Loading Targets..." too.

I was able to get it working by just pasting the CloudFront URL in there — it immediately detected it. Click Save Record Set. Do that again for the root domain A record. The values of your two A records should now begin with ALIAS.

Propagation

Your domain name will begin pointing to your new CloudFront distribution shortly. It may begin working for you in a few minutes, but it's best to keep your old server and DNS active for a few days.

If you like, you can run a dig trace straight from a root nameserver to check the new records directly, bypassing your local resolver's cache:

dig YOUR-DOMAIN.com +trace @a.root-servers.net

After a few days, log into your old server and check whether any requests are still being served. It should be idle, or only receive occasional hits from bots. It's now safe to pull your old DNS; for me that meant deleting the domain from Zerigo.5

Congrats! Your site is now hosted on S3 with CloudFront (and very fast). From now on you can see all of your hosting expenses in one place.