Moving my blog to AWS

April 19 2019 · tech aws

Up until now my blog has been hosted on the Google Cloud Platform (GCP). At work I’m studying for the Amazon Web Services (AWS) Developer Associate certificate, so I wanted some more hands-on experience with AWS. Solely for the sake of learning, I have moved my blog from GCP to AWS.

On GCP I was running the following setup:

  • Compute instance running Nginx serving the website
  • SSL certificate issued by Let’s Encrypt
  • Photos stored on the Google Cloud Storage service

On AWS the setup for this supposedly simple static site is more complex, though it no longer requires maintaining a server instance. I’m running:

  • Two S3 buckets. One for the website, the other for my photos.
  • A Cloudfront distribution for each bucket.
  • A Route53 hosted zone for each Cloudfront distribution: traffic to the site’s domain goes to the website Cloudfront distribution, and traffic to the photos domain goes to the photos distribution.
  • A single SSL certificate registered through AWS Certificate Manager.
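The bucket half of this setup can be sketched with the AWS CLI. The bucket names below are placeholders (the post doesn’t use my real ones), and the error document is an assumption:

```shell
#!/bin/sh
set -eu

# Hypothetical bucket names; substitute your own.
SITE_BUCKET="${SITE_BUCKET:-my-blog-site}"
PHOTO_BUCKET="${PHOTO_BUCKET:-my-blog-photos}"

setup_buckets() {
  # One bucket for the website, one for the photos.
  aws s3 mb "s3://${SITE_BUCKET}"
  aws s3 mb "s3://${PHOTO_BUCKET}"
  # Enable static website hosting on the site bucket so Cloudfront
  # can later point at the website endpoint rather than the raw bucket.
  aws s3 website "s3://${SITE_BUCKET}" \
    --index-document index.html --error-document 404.html
}
```

The Cloudfront distributions and the ACM certificate still have to be created separately; the console is the easier route for those.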

There is good documentation published by AWS on how to set this up yourself. A couple of things I noted while following this setup:

  • Your domain name doesn’t have to be registered with AWS in order to use Route53. In Route53, just create a hosted zone for your domain, then in your domain registrar’s nameserver settings enter the NS records from the hosted zone.
  • If your bucket name doesn’t match your domain, you can still serve requests to the domain through some Route53 and Cloudfront black magic using A alias records.
  • Cloudfront wasn’t serving the index.html file in subdirectories by default. For example, a GET request to the site root would return the root index.html, but a request to a subdirectory such as /aurora/ would return an S3 403 error rather than aurora/index.html. To fix this behaviour, in the AWS Console navigate to the Cloudfront distribution, and under Origins and Origin Groups set the Origin Domain Name and Path to the S3 website endpoint (of the form <bucket>.s3-website-<region>.amazonaws.com), not the raw S3 URL (<bucket>.s3.amazonaws.com).
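The hosted-zone step from the first note can also be sketched with the CLI. Here example.com is a stand-in domain and the caller reference is just a uniqueness token, both my own assumptions:

```shell
#!/bin/sh
set -eu

# Stand-in domain; substitute your own.
DOMAIN="${DOMAIN:-example.com}"

create_zone() {
  # The caller reference must be unique per request; a timestamp will do.
  aws route53 create-hosted-zone \
    --name "$DOMAIN" \
    --caller-reference "blog-$(date +%s)"
}

list_nameservers() {
  # Prints the NS records to copy into your registrar's nameserver
  # settings ($1 is the hosted zone ID returned by create_zone).
  aws route53 get-hosted-zone --id "$1" \
    --query 'DelegationSet.NameServers' --output text
}
```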

Deployment

In my old setup the deployment script was a simple rsync. Here it isn’t much more complicated. It uploads the files to S3 and then invalidates the Cloudfront cache so that changes are immediately reflected.

cd public && aws s3 sync --acl public-read . s3://

aws cloudfront create-invalidation --distribution-id ABCDEF --paths '/*'
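The same two steps can be wrapped in a small script, with the bucket name and distribution ID pulled from environment variables. The defaults below are placeholders (ABCDEF is the placeholder distribution ID from above), and the RUN_DEPLOY guard is my addition so the script does nothing unless explicitly asked:

```shell
#!/bin/sh
set -eu

# Placeholder values; override via the environment.
BLOG_BUCKET="${BLOG_BUCKET:-my-blog-site}"
CF_DIST_ID="${CF_DIST_ID:-ABCDEF}"

deploy() {
  # Upload the generated site, then purge the CDN cache so the new
  # pages are served immediately instead of waiting for the TTL.
  aws s3 sync --acl public-read public/ "s3://${BLOG_BUCKET}"
  aws cloudfront create-invalidation \
    --distribution-id "${CF_DIST_ID}" \
    --paths '/*'
}

if [ "${RUN_DEPLOY:-0}" = "1" ]; then
  deploy
fi
```

One thing to keep in mind: Cloudfront invalidations are only free up to a monthly quota of paths, so invalidating '/*' on every deploy is fine for a small blog but worth reconsidering for anything busier.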
