Lately I have been working on a single-page application (SPA). From a high-level perspective, the goal is to provide a documentation portal for a set of RESTful APIs. The APIs themselves were designed with Swagger, so all information can be easily discovered and consumed from the Swagger output (a JSON file containing paths, resource definitions, security details, and so on). Additional information not provided by Swagger is written in markdown files and presented in a separate location (things like getting started guides, how to authenticate, and so on).

The client is developed in Angular 2 and packaged with WebPack. If you are not familiar with Angular, don't worry; it's merely a technical detail. The output of packaging our Angular code is an index.html, a Cascading Style Sheet and a couple of JavaScript files (thanks to the magic of WebPack, the JavaScript is aggregated into a small set of output files). Since all markdown files are written up-front and the Swagger output is pre-generated, there is no need for a back-end service. That's it, a simple web application.

For such a simple application, hosting should be easy too, right? We could use Virtual Machines, but then we would have to set up and configure a web server, manage operating system and software updates, manage load-balancing and all the fun that comes with it (like server rotation, load distribution strategies and testability), install security certificates, set up delegation permissions and user roles, and so on. Honestly, it's way too much work for such a simple application. Keep in mind we are just serving static files.

For this challenge, Amazon Web Services offers quite a neat solution: meet AWS S3, or Amazon Simple Storage Service. It is described by Amazon as "object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web. It is designed to deliver 99.999999999% durability, and scale past trillions of objects worldwide". In other words, this service can be described as a very reliable, auto-scalable, highly-customizable Dropbox for developers (funny enough, Dropbox uses S3 as a file storage facility). The pricing model revolves around file transfer and space used, but honestly, it's quite cheap. The rules are quite simple: create a new S3 bucket with a unique name, upload your files, enable 'website hosting' and configure the bucket policy to allow read-only access for anybody. The following set of screenshots demonstrates how you can do this.
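If you prefer the command line over the console, the same steps can be sketched with the AWS CLI. The bucket name matches the one used throughout this article, but the build output folder ('./dist') is an assumption, and the commands require AWS credentials to be configured:

```shell
# 1. Create the bucket (the name must be globally unique)
aws s3 mb s3://www.johnlouros.com

# 2. Upload the packaged application (index.html, CSS and JS bundles)
aws s3 sync ./dist s3://www.johnlouros.com

# 3. Enable static website hosting, serving index.html by default
aws s3 website s3://www.johnlouros.com --index-document index.html
```

The screenshots below walk through the same steps in the AWS console.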

Log in to your AWS account and navigate to the S3 console. Create a new S3 bucket; keep in mind the bucket name needs to be globally unique.

Highlight the newly created bucket, select 'Static Website Hosting', enable it and define the default index file (i.e. index.html).

Define the bucket policy by selecting 'Permissions'.

By default, S3 buckets are private and not accessible to unauthorized users; however, we want to use this one as a website. To allow any public anonymous user read access to all objects inside the 'www.johnlouros.com' bucket, the following policy can be used. The policy states that the action of getting a bucket object should be allowed for any principal (or user), on any object inside the 'www.johnlouros.com' bucket.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::www.johnlouros.com/*"]
    }
  ]
}
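Instead of pasting the policy into the console, you can also save it to a local file and apply it with the AWS CLI. A small sketch (the file name 'policy.json' is an assumption; applying it requires AWS credentials, so that command is shown as a comment):

```shell
# Save the bucket policy from above to a local file
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::www.johnlouros.com/*"]
    }
  ]
}
EOF

# Apply it to the bucket (requires configured AWS credentials):
# aws s3api put-bucket-policy --bucket www.johnlouros.com --policy file://policy.json
```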

From the bucket policy definition, let's take a closer look at the 'Resource' property. It can be broken down into the following sections:

'arn:aws:s3:::' defines the Amazon Resource Name (ARN) prefix for the AWS S3 service (aws:s3)

'www.johnlouros.com' bucket name

'/*' applies the policy to all objects in the bucket
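The breakdown above follows the general colon-separated ARN format, which a short bash snippet can make explicit (note that S3 ARNs leave the region and account fields empty, since bucket names are global):

```shell
# General ARN format: arn:partition:service:region:account-id:resource
arn="arn:aws:s3:::www.johnlouros.com/*"
IFS=':' read -r _ partition service region account resource <<< "$arn"

echo "partition: $partition"   # aws
echo "service:   $service"     # s3
echo "region:    $region"      # (empty: S3 bucket ARNs are global)
echo "account:   $account"     # (empty)
echo "resource:  $resource"    # www.johnlouros.com/*
```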

That's all! Now you can navigate to the endpoint provided in the 'Static Website Hosting' section to test your application. As a sanity check, open a new browser window in private mode to ensure you're not browsing the website as an authenticated AWS user.
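The same sanity check can be done from the command line, since curl sends no AWS credentials. The endpoint shown here assumes the bucket lives in the us-east-1 region; copy the exact URL from the 'Static Website Hosting' section of the console:

```shell
# A 200 response confirms anonymous read access is working
curl -I http://www.johnlouros.com.s3-website-us-east-1.amazonaws.com/
```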

What do you think of this approach? I can tell you there are a few limitations, but I will leave those for an upcoming article.