Up until now my blog has been hosted on the Google Cloud Platform (GCP). At work I’m studying for the Amazon Web Services (AWS) Developer Associate certification, so I wanted some more hands-on experience with AWS. Solely for the sake of learning, I have moved my blog from GCP to AWS.
On GCP I was running the following setup:
- Compute instance running Nginx serving the website
- SSL certificate issued by Let’s Encrypt
- Photos stored on the Google Cloud Storage service
On AWS the setup for this supposedly simple static site is more complex, though there is no longer a server instance to maintain. I’m running:
- Two S3 buckets. One for the website, the other for my photos.
- A CloudFront distribution for each bucket.
- A Route53 hosted zone for each CloudFront distribution. All traffic to `www.isthisit.nz` goes to the website CloudFront distribution, and all traffic to `static.isthisit.nz` goes to the photos distribution.
- A single SSL certificate registered through AWS Certificate Manager.
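Roughly, the pieces above can be created with the AWS CLI. This is a sketch rather than the exact commands I ran; the CloudFront distributions and Route53 records take more configuration than fits here. One detail worth knowing: certificates used by CloudFront must be requested in `us-east-1`, regardless of where the buckets live.

```shell
# Create the two buckets and enable static website hosting on the site bucket.
aws s3 mb s3://www.isthisit.nz
aws s3 mb s3://static.isthisit.nz
aws s3 website s3://www.isthisit.nz --index-document index.html

# Certificates for CloudFront must live in us-east-1.
aws acm request-certificate \
    --region us-east-1 \
    --domain-name isthisit.nz \
    --subject-alternative-names '*.isthisit.nz' \
    --validation-method DNS
```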
There is good documentation published by AWS on how to set this up yourself. A couple of things I noted while following it:
- Your domain name doesn’t have to be registered with AWS in order to use Route53. In Route53, just create a hosted zone for your domain, then in your domain’s nameserver settings enter the `NS` records for the Route53 addresses. More here.
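A sketch of that delegation step with the AWS CLI (the hosted-zone ID here is a placeholder; yours will be in the `create-hosted-zone` response):

```shell
# Create a hosted zone for the domain; the response includes four NS values.
aws route53 create-hosted-zone \
    --name isthisit.nz \
    --caller-reference "$(date +%s)"

# List the NS records to copy into your registrar's nameserver settings.
aws route53 list-resource-record-sets \
    --hosted-zone-id Z0000000000000 \
    --query "ResourceRecordSets[?Type=='NS']"

# Once the registrar change has propagated, verify the delegation.
dig +short NS isthisit.nz
```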
- If your bucket name is `www.isthisit.nz` you can still serve requests to `isthisit.nz` through some Route53 and CloudFront black magic: add the apex domain as an alternate domain name (CNAME) on the distribution and point a Route53 alias record at it.
- CloudFront wasn’t serving the `index.html` file by default in subdirectories. For example, a GET request to `isthisit.nz` would return the root `index.html`, but a request to `isthisit.nz/aurora/` would return an S3 403 error rather than `aurora/index.html`. To fix this behaviour, in the AWS Console navigate to the CloudFront distribution, and under Origins and Origin Groups the Origin Domain Name and Path must be set to the S3 website URL (e.g. `www.isthisit.nz.s3-website-us-west-2.amazonaws.com/`), not the raw S3 URL.
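A quick way to check the behaviour, once the distribution is live, is to request a subdirectory with a trailing slash; with the website endpoint as the origin this should come back 200 rather than 403 (`/aurora/` is just the path from my own site — substitute one of yours):

```shell
# Prints the HTTP status code only. Against the raw S3 origin this
# returned 403; against the website-endpoint origin it returns 200.
curl -s -o /dev/null -w '%{http_code}\n' https://isthisit.nz/aurora/
```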
In my old setup the deployment script was a simple `rsync`. Here it isn’t much more complicated: it uploads the files to S3 and then invalidates the CloudFront cache so that changes are immediately reflected.
```shell
hugo
cd public && aws s3 sync --acl public-read . s3://www.isthisit.nz
aws cloudfront create-invalidation --distribution-id ABCDEF --paths '/*'
```
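The same three steps can be wrapped into a small script that stops at the first failure instead of ploughing on with a broken build (a sketch; `ABCDEF` is the same placeholder distribution ID as above):

```shell
#!/usr/bin/env bash
# deploy.sh -- build the site, sync it to S3, then bust the CloudFront cache.
set -euo pipefail

DISTRIBUTION_ID=ABCDEF          # from the CloudFront console
BUCKET=s3://www.isthisit.nz

hugo
aws s3 sync --acl public-read public/ "$BUCKET"
aws cloudfront create-invalidation \
    --distribution-id "$DISTRIBUTION_ID" \
    --paths '/*'
```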