Serverless Primer

Talk is cheap. Show me the code.
– Linus Torvalds

Serverless

Serverless is something you’re probably hearing about a lot these days, and it’s something I love. Even this website’s contact page is a serverless app. But what is it exactly?

Traditionally you would build a web application running on Node, Tomcat, or something similar. This means you’ve got an underlying server to manage in addition to the services you need to run. In most cases there’s also some sort of database or datastore, which is yet another thing to manage. What if you don’t want to manage all of that? That’s where serverless comes into play.

  • Instead of traditional compute such as EC2 or on-prem bare metal/VMs, with serverless we can run our code in Lambda. Think micro-services and stateless, not a giant monolith. You’re no longer paying for a machine to be up all the time; you only pay for what’s being used. There’s no need to manage system updates either; that all happens in the background, with no more downtime.

  • Most of the time you’ve got an app that plugs into a database of some sort. This is always a pain to maintain and tune, especially when you want something highly available. If you need a relational database you can use RDS, or look into DynamoDB if you need NoSQL. RDS isn’t serverless, but it is managed, meaning we don’t deal with maintenance and that fun stuff. DynamoDB, however, is a serverless solution.

  • For the front end, something like nginx or the Apache web server is normally used. But we can easily offload this to an S3 bucket and front it with CloudFront. A nice perk here is that AWS handles the SSL certs on the CloudFront distribution.

  • For storage in a serverless world, S3 object storage is king. And the best part: no pre-provisioning; it’s all you can eat!

There’s a plethora of other serverless services that can help you build an application.

Did I mention, serverless stuff scales really well?

Serverless Framework

How can all this serverless stuff be managed? For this I’m a fan of the serverless framework. The serverless framework can help us easily deploy our applications to any of the major cloud providers. It’s easy to integrate into a CI/CD pipeline if you choose, and they also offer a neat web portal you can use.

Isn’t this the same thing as the AWS Serverless Application Model (SAM)? SAM is similar to the serverless framework, but SAM is much more like CloudFormation, meaning you can run into the same CloudFormation template issues you might normally see. With the serverless framework there are fewer limitations, and it can easily be extended with plugins. The serverless framework really simplifies the deployment process; fewer steps are required to do the same thing.

Let’s Build An App

Let’s take a look at this serverless thing in action. Install the serverless framework and get a Python venv going; npm install -g serverless and python3 -m venv venv will do this. You should have your AWS CLI configured as well.

This is going to be a simple app that takes a POST request through an API gateway and sends it to our first Lambda handler. From there we send it to an SQS queue to be processed by another Lambda, and ultimately email the message with SES.

API GW -> Lambda1 -> SQS -> Lambda2 -> SES

The Code

I’m going to be using Python for this; hence the Python venv we set up above. I’m running Python 3.9 on my local machine. It’s worth noting the Lambda runtime we’ll use is Python 3.8, but this code will still work. Note: All of the code can be found here to follow along with.

In sqs/handler.py you can find the primary handler code I’m using for this app. I’ve got four functions in there, but the following two are the most important, as they get kicked off from events; in this case an API gateway event and an SQS queue event.

This first function is the handler called on a POST to the /messages route. The function parses the request and looks for the message payload, something like {"message":"yo yo yo yo"}. We return some HTTP headers and a response to let the client know things went well.

def api_gw_post_message(event, context):
    """
    Receive http POST from the API GW. Parse message and place in SQS queue
    """
    message = json.loads(event.get("body")).get("message")
    print(message)
    message_to_sqs_queue(message)
    headers = {"Access-Control-Allow-Origin": "*"}
    body = '{"status": "OK"}'
    return {"headers": headers, "statusCode": 200, "body": body}

api_gw_post_message calls another function I have, message_to_sqs_queue, which places the message into an SQS queue. When there’s a message in the queue, the sqs_queue_event_handler function shown below processes it. That function hands the message to the fourth function, send_email_ses, to send it via SES.
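message_to_sqs_queue isn’t shown in the snippets here, so here’s a minimal sketch of what it might look like; this is an assumption on my part, not the exact code from the repo, and it assumes a boto3 SQS client plus the SQS_URL environment variable we’ll define in serverless.yml later.

# Sketch (assumption): push the parsed message onto the queue whose URL
# comes from the SQS_URL environment variable set in serverless.yml.
import os

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ.get("SQS_URL")


def message_to_sqs_queue(message):
    """Place the parsed message onto the SQS queue."""
    response = sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=message)
    print(response)
    return response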

Something to note with this configuration: having an SQS queue as a Lambda event source means we don’t need to delete messages from the queue when we successfully process them; AWS handles this. If something bad does happen, the message is kept in the queue and your Lambda tries again.

def sqs_queue_event_handler(event, context):
    """
    Triggered via messages in the SQS queue. Parses the queue's message,
    and grabs our message. We take the message and call send_email_ses(message)
    """
    print(event)
    for record in event["Records"]:
        message = record.get("body")
        print(message)
        print(send_email_ses(message))
    return {"status": "OK"}

Infrastructure as Code

So we’ve got some code, but how do we run it? Well, we’re going to run it on AWS and let the serverless framework do the heavy lifting. In serverless.yml (found here) we get to see how easy it is to get going.

In the first bit here we give the app a name, and input some important information.

  • provider name - AWS/Azure/Google? AWS for this

  • runtime - Python, Node, Go? What does your app run on?

  • stage - Prod, stage, dev? What stage to deploy to?

  • region - us-east-1, us-west-1? Where is this going to run?

  • stackTags - What do you want these resources tagged with?

  • environment - What environment variables do you need?

  • logRetentionInDays - How long should logs live in Cloudwatch?

  • iamRoleStatements - Define the IAM role policy that the app will leverage

service: python-sqs-srv

provider:
  name: aws
  runtime: python3.8
  stage: ${opt:stage, 'dev'}
  region: ${opt:region, 'us-west-2'}
  stackTags:
    billingproject: ${self:service}
  environment:
    SQS_URL:
      Ref: testQueue
    SENDER_EMAIL: ${opt:send_from, 'foo@bar.com'}
    SENDTO_EMAIL: ${opt:send_to, 'bar@foo.com'}
  logRetentionInDays: 5
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "sqs:SendMessage"
      Resource:
        Fn::GetAtt: [ testQueue, Arn ]
    - Effect: 'Allow'
      Action:
        - 'ses:SendEmail'
      Resource: ['*']

Here’s the next important part. This is where we wire the handler functions to events. We could have many more functions if we wanted, with all sorts of event triggers.

package:
  individually: true
  exclude:
    - '*/**'

functions:
  api_gw_post_message:
    handler: sqs/handler.api_gw_post_message
    memorySize: 128
    package:
      include:
        - 'sqs/**'
    events:
      - http:
          method: post
          path: messages
          cors: true
  sqs_message_handler:
    handler: sqs/handler.sqs_queue_event_handler
    memorySize: 128
    package:
      include:
        - 'sqs/**'
    events:
      - sqs:
          arn:
            Fn::GetAtt: [ testQueue, Arn ]

And finally, we create the SQS queue. One less thing to deal with in the console! You can see above how we plug in the resource created below with Fn::GetAtt: [ testQueue, Arn ].

resources:
  Resources:
    testQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: ${self:custom.queueName}

custom:
  queueName: ${opt:queue, 'python-first-queue'}

Something to note here: I am using AWS SES. To make this work, I had to verify my sending domain as well as an email address to send to. Restrictions on who you can send to can be removed if you request to have your account taken out of sandbox mode.
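If you’d rather script those verifications than click through the console, a minimal sketch with boto3 might look like the following; the domain and address here are placeholders, and the AWS CLI works just as well.

# Sketch (assumption): kick off the SES identity verifications from Python.
import boto3

ses = boto3.client("ses", region_name="us-west-2")

# Returns a token to publish as a TXT record in the domain's DNS
print(ses.verify_domain_identity(Domain="example.com"))

# SES emails a verification link to the recipient address
print(ses.verify_email_identity(EmailAddress="foo@bar.com"))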

Deploying

We’ve got all of the ingredients ready; now it’s time to put them in the oven. This is where the fun stuff happens. Let’s run the following command: sls deploy --send_from noreply@mikemiller.tech --send_to foo@bar.com

Here’s the output we get from the command. You can also see this stack in the CloudFormation console.

[mmiller@Mikes-MacBook-Pro-13 Lambda_SQS (master)]$ sls deploy --send_from noreply@mikemiller.tech --send_to foo@bar.com

Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Excluding development dependencies...
Serverless: Creating Stack...
Serverless: Checking Stack create progress...
........
Serverless: Stack create finished...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service api_gw_post_message.zip file to S3 (11.14 KB)...
Serverless: Uploading service sqs_message_handler.zip file to S3 (11.14 KB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
................................................
Serverless: Stack update finished...
Service Information
service: python-sqs-srv
stage: dev
region: us-west-2
stack: python-sqs-srv-dev
resources: 17
api keys:
None
endpoints:
POST - https://abcd1234.execute-api.us-west-2.amazonaws.com/dev/messages
functions:
api_gw_post_message: python-sqs-srv-dev-api_gw_post_message
sqs_message_handler: python-sqs-srv-dev-sqs_message_handler
layers:
None

In Action

We’ve got our API gateway’s endpoint https://abcd1234.execute-api.us-west-2.amazonaws.com/dev/messages. Let’s test this thing out with curl -

[mmiller@Mikes-MacBook-Pro-13 Lambda_SQS (master)]$ curl -XPOST https://abcd1234.execute-api.us-west-2.amazonaws.com/dev/messages -d '{"message":"yo yo yo yo"}'
{"status": "OK"}

In my email

From: <noreply@mikemiller.tech>
Date: Sun, Jan 3, 2021 at 16:00
Subject: email subject string
To: <foo@bar.com>


yo yo yo yo

It worked! 😎

Testing

We’ve built and deployed our app, and it seems to work. But we really should write some tests for the code we’ve got. How can we mock AWS resources for tests? That’s what moto is for. Make sure you’ve installed the dependencies from requirements.txt with pip. Here’s what our test file test_handler.py looks like.

import unittest
import boto3
from moto import mock_sqs, mock_ses
import handler

API_GW_EVENT = '...'  # sample event of mine
SQS_EVENT = '...'  # sample event of mine


class TestHandler(unittest.TestCase):
    @mock_sqs
    def test_message_to_sqs_queue(self):
        print("\nRunning test_message_to_sqs_queue")
        sqs = boto3.resource("sqs")
        queue = sqs.create_queue(QueueName="test-sqs-message")
        handler.QUEUE_URL = queue.url
        message = "Testing with a valid message"
        expected_message = str(message)
        handler.message_to_sqs_queue(message)
        sqs_messages = queue.receive_messages()
        assert (
            sqs_messages[0].body == expected_message
        ), "Message does not match expected"
        assert len(sqs_messages) == 1, "Expected exactly one message in SQS"

    def test_message_to_sqs_queue_exception(self):
        print("\nRunning test_message_to_sqs_queue_exception")
        queue = None
        handler.QUEUE_URL = queue
        message = None
        self.assertRaises(Exception, handler.message_to_sqs_queue, message)

    @mock_sqs
    def test_api_gw_post_message(self):
        print("\nRunning test_api_gw_post_message")
        sqs = boto3.resource("sqs")
        queue = sqs.create_queue(QueueName="test-sqs-message")
        handler.QUEUE_URL = queue.url
        headers = {"Access-Control-Allow-Origin": "*"}
        context = {}
        body = '{"status": "OK"}'
        expected_data = {"headers": headers, "statusCode": 200, "body": body}
        result = handler.api_gw_post_message(API_GW_EVENT, context)
        assert result == expected_data

    @mock_ses
    def test_send_email_ses(self):
        print("\nRunning test_send_email_ses")
        ses = boto3.client("ses")
        ses.verify_email_address(EmailAddress="test@example.com")
        handler.send_email_ses.ses = ses
        message = "Testing with a valid message"
        response = handler.send_email_ses(message)
        assert response["ResponseMetadata"]["HTTPStatusCode"] == 200

    @mock_ses
    def test_sqs_queue_event_handler(self):
        print("\nRunning test_sqs_queue_event_handler")
        ses = boto3.client("ses")
        ses.verify_email_address(EmailAddress="test@example.com")
        handler.send_email_ses.ses = ses
        context = {}
        expected_result = {"status": "OK"}
        result = handler.sqs_queue_event_handler(SQS_EVENT, context)
        assert result == expected_result

Run the tests

(venv) [mmiller@Mikes-MacBook-Pro-13 Lambda_SQS (master)]$ python -m pytest -v
====================================================== test session starts ======================================================
platform darwin -- Python 3.9.0, pytest-6.2.1, py-1.10.0, pluggy-0.13.1 -- /Users/mmiller/Projects/Lambda_SQS/venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/mmiller/Projects/Lambda_SQS
collected 5 items

sqs/test_handler.py::TestHandler::test_api_gw_post_message PASSED [ 20%]
sqs/test_handler.py::TestHandler::test_message_to_sqs_queue PASSED [ 40%]
sqs/test_handler.py::TestHandler::test_message_to_sqs_queue_exception PASSED [ 60%]
sqs/test_handler.py::TestHandler::test_send_email_ses PASSED [ 80%]
sqs/test_handler.py::TestHandler::test_sqs_queue_event_handler PASSED [100%]

======================================================= warnings summary ========================================================
venv/lib/python3.9/site-packages/boto/plugin.py:40
/Users/mmiller/Projects/Lambda_SQS/venv/lib/python3.9/site-packages/boto/plugin.py:40: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp

-- Docs: https://docs.pytest.org/en/stable/warnings.html
================================================= 5 passed, 1 warning in 2.76s ==================================================
(venv) [mmiller@Mikes-MacBook-Pro-13 Lambda_SQS (master)]$

And we’re good with testing! The tests could probably be enhanced a bit, but this works for now.

Updates

Say the app is up and running, but a new feature needs to be implemented. It’s as easy as running your sls deploy command again, and the stack is updated. If you set up some sort of CI/CD, this step can be automated for you.

Final Thoughts

Although the app shown here is pretty simple, it does showcase the power of serverless applications. Companies are moving towards serverless applications built as collections of individual micro-services. The agility and maintainability make it a clear choice for new cloud-native apps. AWS and other providers are making serverless apps easier and easier to build, not to mention the serverless framework. Pairing serverless with a CI/CD solution makes maintenance and updates seriously painless.

The cost factor is a huge rationale for serverless as well. In most cases you pay for what you need and don’t worry about over- or under-provisioning. We like variable expense versus capital expense. Read more about AWS pricing here.

Give serverless a shot, it’s fun and costs next to nothing to get started.