Serverless Blog & AWS CodePipeline

Hello friend.
– Elliot Alderson, Mr. Robot

Why a New Blog

So if you know me in person, you know I’m all about being cloud native, trying to go serverless where I can. This blog used to be a WordPress instance running in AWS Lightsail. Originally I thought WP would be simple and easy to maintain… Well, no. I quickly found out that I don’t want to maintain a WordPress instance. It’s a monolith that can quickly become a security issue, and I don’t want to have to patch it continuously. Aside from just dealing with maintenance, things like this can happen too.

Simple was a requirement for me, and just writing a post in WP is a pain; the editor is bad. And no, I don’t want to install a bunch of plugins to make it better. CloudFront caching with WP (at least with the Bitnami image…) was a bit painful, and it never worked properly.

And then there was the cost. Lightsail was $5 USD… That’s too much for a website. Yes, I’m aware I could save a bit with the $3.50 USD Lightsail instance, but that’s still too much. For my use case, I can run this blog serverless, leveraging CloudFront and S3, for under $1 a month.

What’s New

So instead of WordPress, I want to be serverless. I also didn’t want to get too fancy with this or reinvent the wheel. I’ve decided to use a Node.js framework called Hexo. Hexo is a static blog generator that uses markdown. This is great: it means I don’t have to bother with editors that are overly bloated for my needs, and it makes it easy to push changes through a simple deployment pipeline. To make it look nice, I’m using the Cactus Hexo theme, with a number of changes…

The Workflow

This is a static website, but there is still a workflow around creating new content and deploying it. On my end, to create a new post I issue hexo new "post name", and then I can write in markdown. Yay!

Once done editing, I commit to git and push to my repo living on GitHub. From GitHub, AWS CodePipeline picks up the job and has CodeBuild build the code. Then CodePipeline deploys the artifacts to an S3 bucket. The bucket is configured with CloudFront sitting in front of it to handle SSL and the CDN stuff.

                +--------------------------------------------------------------------+
                |                          AWS CodePipeline                          |
+------------+  | +--------------------+  +------------------+  +------------------+ |
|            |  | |                    |  |                  |  |                  | |
|   GitHub   +->| |   AWS CodeBuild    +->+  Deploy to S3    +->+  Clear CF Cache  | |
|            |  | |                    |  |                  |  |                  | |
+------------+  | +--------------------+  +------------------+  +------------------+ |
                +--------------------------------------------------------------------+

Getting up and running with AWS CodePipeline took under 10 minutes. Most of the configuration is point-and-click, and the entire process could easily be written in CloudFormation. CodeBuild does require some configuration, which happens in buildspec.yml. This file just lets CodeBuild know what we need it to do for the build. Here’s my build file; not too much to it: we install some dependencies and build.

version: 0.2

phases:
  install:
    commands:
      - npm install -g hexo-cli
      - npm ci

  build:
    commands:
      - hexo generate

artifacts:
  files:
    - "**/*"
  discard-paths: no
  base-directory: public

Once this is deployed to the bucket, I invalidate my CloudFront distribution with the following Python. This is just another action in my deploy stage in CodePipeline. At the end of this Lambda, I use put_job_success_result to let CodePipeline know it’s all done.

import boto3
import time
import os


def lambda_handler(event, context):
    # A single wildcard path invalidates every object in the distribution.
    all_files = ["/*"]
    client = boto3.client("cloudfront")
    invalidation = client.create_invalidation(
        DistributionId=os.getenv("DISTRIBUTION_ID"),
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": all_files},
            # CallerReference just needs to be unique per invalidation request.
            "CallerReference": str(time.time()),
        },
    )

    # Tell CodePipeline the action succeeded so the pipeline can finish.
    pipeline = boto3.client("codepipeline")
    response = pipeline.put_job_success_result(jobId=event["CodePipeline.job"]["id"])
    return response
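
One thing worth noting: if create_invalidation throws, CodePipeline never hears back and the action just hangs until it times out. Here’s a minimal sketch of how the failure path could be reported as well, using put_job_failure_result; the try/except wrapper is my own addition, not what’s running above.

import boto3
import time
import os


def lambda_handler(event, context):
    job_id = event["CodePipeline.job"]["id"]
    pipeline = boto3.client("codepipeline")
    try:
        boto3.client("cloudfront").create_invalidation(
            DistributionId=os.getenv("DISTRIBUTION_ID"),
            InvalidationBatch={
                "Paths": {"Quantity": 1, "Items": ["/*"]},
                "CallerReference": str(time.time()),
            },
        )
        return pipeline.put_job_success_result(jobId=job_id)
    except Exception as exc:
        # Surface the error in the pipeline console instead of timing out.
        return pipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(exc)},
        )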

A Few More Things

While hosting a website like this, it’s important to remember we’re not running a webserver, so we aren’t getting all the response headers we might want. For example, we want security headers.

Within my CloudFront distribution, I adjusted my behaviors by adding a Lambda function association for origin response events. Wait, what? Whenever CloudFront fetches a response from the origin, it calls a Lambda function that adds the headers we want to the response. Here’s the code, running in a Node.js 12.x runtime:

'use strict';

exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    headers['strict-transport-security'] = [{
        key: 'Strict-Transport-Security',
        value: 'max-age=63072000; includeSubdomains; preload'
    }];
    headers['content-security-policy'] = [{
        key: 'Content-Security-Policy',
        value: "default-src 'unsafe-inline' 'unsafe-eval' https: 'self' data:; upgrade-insecure-requests; frame-ancestors 'self'"
    }];
    headers['x-content-type-options'] = [{
        key: 'X-Content-Type-Options',
        value: 'nosniff'
    }];
    headers['x-frame-options'] = [{
        key: 'X-Frame-Options',
        value: 'DENY'
    }];
    headers['x-xss-protection'] = [{
        key: 'X-XSS-Protection',
        value: '1; mode=block'
    }];
    headers['referrer-policy'] = [{
        key: 'Referrer-Policy',
        value: 'same-origin'
    }];
    headers['permissions-policy'] = [{
        key: 'Permissions-Policy',
        value: "sync-xhr=(self)"
    }];
    headers['content-language'] = [{
        key: 'Content-Language',
        value: "en-US"
    }];

    // Not using this..yet
    // headers['expect-ct'] = [{
    //     key: 'Expect-CT',
    //     value: "enforce, max-age=30"
    // }];

    // Drop the Server header so we don't advertise the origin.
    delete response.headers["server"];

    callback(null, response);
};
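
A quick way to confirm the association is working is to fetch the site and look at what comes back. A small sketch using only the Python standard library; the URL is a placeholder for your own domain:

import urllib.request

# Placeholder URL; point this at your CloudFront distribution.
req = urllib.request.Request("https://www.example.com/", method="HEAD")
with urllib.request.urlopen(req) as resp:
    for name in ("Strict-Transport-Security", "X-Frame-Options", "Server"):
        # getheader() returns None when a header is absent, so after the
        # delete above, Server should print as None.
        print(f"{name}: {resp.getheader(name)}")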

Another note on this Lambda@Edge implementation: the function must be created in us-east-1. The same goes for other services you might hook up to CloudFront.

One last thing to do is set this CORS policy on our bucket. This lets CloudFront compress objects properly.

[
    {
        "AllowedHeaders": [
            "Content-Length"
        ],
        "AllowedMethods": [
            "GET"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    }
]
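
For reference, the same rules can be applied from code instead of the console. A sketch using boto3’s put_bucket_cors; the bucket name is a placeholder:

import boto3

s3 = boto3.client("s3")
# "static-site-bucket" is a placeholder; use your own bucket name.
s3.put_bucket_cors(
    Bucket="static-site-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedHeaders": ["Content-Length"],
                "AllowedMethods": ["GET"],
                "AllowedOrigins": ["*"],
                "ExposeHeaders": [],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)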

Might as well show the bucket policy as well. This is set up so CloudFront can get the objects. I’m using a condition here so that only CloudFront can make requests to the bucket, based on the Referer header. In my CF origin settings, I send the Referer header with a value I specify.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudFrontReadGetObjectVersion",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::static-site-bucket/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": "referertoken"
                }
            }
        }
    ]
}
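
And if you’re scripting the setup, the policy can be pushed the same way. A sketch with boto3’s put_bucket_policy, assuming the secret token lives in an environment variable (REFERER_TOKEN is my name for it) instead of being hard-coded:

import json
import os

import boto3

# REFERER_TOKEN is an assumed environment variable; its value must match
# the custom Referer header set on the CloudFront origin.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudFrontReadGetObjectVersion",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:GetObjectVersion"],
            "Resource": "arn:aws:s3:::static-site-bucket/*",
            "Condition": {"StringLike": {"aws:Referer": os.environ["REFERER_TOKEN"]}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(
    Bucket="static-site-bucket", Policy=json.dumps(policy)
)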

Why No AWS CodeStar For This

I decided against AWS CodeStar, as it’s almost the same as using AWS CodePipeline, just with less flexibility. AWS CodePipeline allows you to leverage services outside of AWS. With CodeStar, you’re required to use AWS CodeCommit, and I’m not very fond of that service; GitHub and GitLab have my heart.

What’s Next

Well, it’s time to get some content out! I should probably write up some CloudFormation for the initial setup of the website and pipeline. Recently I’ve been very busy with work; part of that busyness was moving servers into a colocation… That’s going to be an upcoming post, and a big reminder of why cloud services are king! And soon enough, I’ll get a post out about the Serverless Framework!

See you soon

Mike