Serverless Primer
> Talk is cheap. Show me the code.
>
> – Linus Torvalds
Serverless
Serverless is something you’re probably hearing about a lot these days, and it’s something I love. Even this website’s contact form is a serverless app. But what is it exactly?
Traditionally you would build a web application running on Node, Tomcat, or something similar. That means you’ve got an underlying server to manage in addition to the services you need to run. More than likely there’s also some sort of database or datastore, which is yet another thing to manage. What if you don’t want to manage all of that? That’s where serverless comes into play.
Instead of traditional compute such as EC2 or on-prem bare metal/VMs, with serverless we can run our code in Lambda. Think microservices and stateless, not a giant monolith. You’re no longer paying for a machine to be up around the clock; you only pay for what’s actually used. There’s no need to manage system updates either; that all happens in the background, with no more downtime.
Most of the time you’ve got an app that plugs into a database of some sort. That’s always a pain to maintain and tune, especially when you want something HA. If you need a relational database you can use RDS, or look into DynamoDB if you need NoSQL. Note that RDS isn’t serverless, but it is managed, meaning we don’t deal with maintenance and that fun stuff; DynamoDB, on the other hand, is a fully serverless solution.
For the front end, something like nginx or Apache is normally used. But we can easily offload that to an S3 bucket and front it with CloudFront. A nice perk here: SSL certificates on the CloudFront distribution are handled by AWS.
For storage in a serverless world, S3 object storage is king. And the best part: no pre-provisioning; it’s all you can eat!
There’s a plethora of other serverless services that can help you build an application.

Did I mention serverless stuff scales really well?
Serverless Framework
How can all this serverless stuff be managed? For this I’m a fan of the Serverless Framework. It can help us easily deploy our applications to any of the major cloud providers. It’s easy to integrate into a CI/CD pipeline if you choose, and they also offer a neat web portal you can use.
Isn’t this the same thing as the AWS Serverless Application Model (SAM)? SAM is similar to the Serverless Framework, but SAM is much closer to plain CloudFormation, meaning you can run into the CFT issues you might see normally. The Serverless Framework has fewer limitations and can easily be extended with plugins. It really simplifies the deployment process; fewer steps required to do the same thing.
Let’s Build An App
Let’s take a look at this serverless thing in action. Install the Serverless Framework, get a Python venv going, and make sure your AWS CLI is configured:

```shell
npm install -g serverless
python3 -m venv venv
```
This is going to be a simple app: a POST request comes in through an API Gateway and is sent to our first Lambda handler. From there we send the message to an SQS queue, to be processed by another Lambda, which ultimately emails the message with SES.
API GW -> Lambda1 -> SQS -> Lambda2 -> SES
The Code
I’m going to be using Python for this; hence the Python venv we set up above. I’m running Python 3.9 on my local machine. It’s worth noting the Lambda runtime we’ll use is Python 3.8, but this code will work on both. Note: All of the code can be found here to follow along with.
In `sqs/handler.py` you can find the primary handler module I’m using for this app. I’ve got four functions in there, but the following two are the most important, as they get kicked off from events: an API Gateway event and an SQS queue event.
This first function is the handler called on a POST to the `/messages` route. The function parses the request and looks for the `message` payload, something like `{"message":"yo yo yo yo"}`. We return some HTTP headers and a response body to let the client know things went well.
```python
def api_gw_post_message(event, context):
    ...
```
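The body of the handler is truncated above, so here’s a hedged sketch of what an API Gateway POST handler like this typically looks like. The `send_to_sqs` helper is my stand-in for the queue-publishing function the post describes, stubbed out so the sketch runs without AWS credentials:

```python
import json

def send_to_sqs(message):
    # Stand-in for the queue-publishing helper. The real version would
    # call boto3, e.g.:
    #   boto3.client("sqs").send_message(QueueUrl=queue_url, MessageBody=message)
    print(f"queued: {message}")

def api_gw_post_message(event, context):
    # API Gateway delivers the POST body as a string under the "body" key.
    try:
        body = json.loads(event.get("body") or "{}")
        message = body["message"]
    except (json.JSONDecodeError, KeyError):
        return {
            "statusCode": 400,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"error": "expected a JSON body with a 'message' field"}),
        }
    send_to_sqs(message)
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"status": "message queued"}),
    }
```

The dict returned here is the shape API Gateway’s Lambda proxy integration expects: a status code, headers, and a JSON-encoded body.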
`api_gw_post_message` calls another function I have that places the message into an SQS queue. When there’s a message in the queue, the following function processes it. That function hands the message to the fourth function, `send_email_ses`, which sends the message out via SES.
Something to note with this configuration: with an SQS queue as a Lambda event source, we don’t need to delete messages from the queue when we successfully process them; AWS handles that. If something bad does happen, the message is kept in the queue and your Lambda tries again.
```python
def sqs_queue_event_handler(event, context):
    ...
```
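This body is elided above as well; here’s a hedged sketch of how an SQS-triggered handler is usually shaped. `send_email_ses` is stubbed so the sketch runs without AWS credentials:

```python
def send_email_ses(message):
    # Stand-in for the SES-sending function from the post. The real version
    # would call boto3, e.g.:
    #   boto3.client("ses").send_email(Source=..., Destination=..., Message=...)
    print(f"emailed: {message}")

def sqs_queue_event_handler(event, context):
    # An SQS event source delivers a batch of messages under "Records",
    # each with its payload in "body". Raising an exception here would
    # leave the batch in the queue for a retry, as described above.
    records = event.get("Records", [])
    for record in records:
        send_email_ses(record["body"])
    return {"processed": len(records)}
```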
Infrastructure as Code
So we’ve got some code, but how do we run it? Well, we’re going to run it on AWS and let the Serverless Framework do the heavy lifting. In `serverless.yml` (found here) we get to see how easy it is to get going.

In the first section we give the app a name and set some important configuration:
- provider name - AWS/Azure/Google? AWS for this
- runtime - Python, Node, Go? What does your app run on?
- stage - Prod, stage, dev? Which stage to deploy
- region - us-east-1, us-west-1? Where is this going to run?
- stackTags - What do you want these resources tagged with?
- environment - What environment variables do you need?
- logRetentionInDays - How long should logs live in CloudWatch?
- iamRoleStatements - Define a role policy that the app will leverage
```yaml
service: python-sqs-srv
```
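Only the first line of the file survives above, so here’s a hedged sketch of what a provider block covering those settings might look like. The values are illustrative (the runtime, stage, and region are taken from elsewhere in the post; the tags, retention, and IAM actions are my guesses, not the post’s actual file):

```yaml
service: python-sqs-srv

provider:
  name: aws
  runtime: python3.8
  stage: dev
  region: us-west-2
  stackTags:
    project: python-sqs-srv
  environment:
    SEND_FROM: ${opt:send_from}
    SEND_TO: ${opt:send_to}
  logRetentionInDays: 14
  iamRoleStatements:
    - Effect: Allow
      Action:
        - sqs:SendMessage
        - ses:SendEmail
      Resource: "*"
```

The `${opt:…}` references are how the Serverless Framework picks up custom CLI flags, which is how the `--send_from` and `--send_to` options used at deploy time below would flow into the Lambdas’ environment.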
Here’s the next important part. This is where we wire the handler functions to their events. We could define many more functions if we wanted, with all sorts of event triggers.
```yaml
package:
```
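That block is cut off after its first key, so as a sketch, the function-to-event wiring might look something like this. The handler paths follow the `sqs/handler.py` module named earlier; the function and queue logical names are my guesses:

```yaml
functions:
  apiGwPostMessage:
    handler: sqs/handler.api_gw_post_message
    events:
      - http:
          path: messages
          method: post
  sqsQueueEventHandler:
    handler: sqs/handler.sqs_queue_event_handler
    events:
      - sqs:
          arn:
            Fn::GetAtt: [ testQueue, Arn ]
```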
And finally, we create the SQS queue. One less thing to deal with in the console! You can see above how we plug in the resource created below with `Fn::GetAtt: [ testQueue, Arn ]`.
```yaml
resources:
```
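The resources block is truncated too; defining the queue in raw CloudFormation syntax under `resources` might look like this (the queue name is illustrative, but `testQueue` matches the `Fn::GetAtt` reference above):

```yaml
resources:
  Resources:
    testQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: test-queue
```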
Something to note here: I am using AWS SES. To make this work, I had to verify my domain for sending, as well as an email address to send to. Restrictions on who you can send to can be removed if you request to have your account taken out of sandbox mode.
Deploying
We’ve got all of the ingredients ready; now it’s time for the oven. This is where the fun stuff happens. Let’s run the following command:

```shell
sls deploy --send_from noreply@mikemiller.tech --send_to foo@bar.com
```
Here’s the output we get from the command. You can also see this stack in the CloudFormation console.
```shell
[mmiller@Mikes-MacBook-Pro-13 Lambda_SQS (master)]$ sls deploy --send_from noreply@mikemiller.tech --send_to foo@bar.com
```
In Action
We’ve got our API Gateway endpoint, `https://abcd1234.execute-api.us-west-2.amazonaws.com/dev/messages`. Let’s test this thing out with curl:
```shell
[mmiller@Mikes-MacBook-Pro-13 Lambda_SQS (master)]$ curl -XPOST https://abcd1234.execute-api.us-west-2.amazonaws.com/dev/messages -d '{"message":"yo yo yo yo"}'
```
And in my email:
```
From: <noreply@mikemiller.tech>
```
It worked! 😎
Testing
We’ve built and deployed our app, and it seems to work. But we really should write some tests for the code we’ve got. How can we mock AWS resources for tests? That’s what moto is for. Make sure you’ve installed the `requirements.txt` file with pip. Here’s what our test file `test_handler.py` looks like.
```python
import unittest
```
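Only the first import of the test file survives above. The real `test_handler.py` uses moto to mock SQS itself; as a stdlib-only illustration of the same idea (faking the AWS call so a handler can be exercised locally), a test might look like this. The `handle_post` function here is a simplified stand-in for the real handler, not the post’s actual code:

```python
import json
import unittest
from unittest import mock

def handle_post(event, sqs_client):
    # Simplified stand-in for the real API Gateway handler: parse the
    # body and push the message onto the (injected) SQS client.
    body = json.loads(event.get("body") or "{}")
    sqs_client.send_message(QueueUrl="fake-queue", MessageBody=body["message"])
    return {"statusCode": 200}

class TestHandler(unittest.TestCase):
    def test_post_sends_to_queue(self):
        fake_sqs = mock.Mock()  # fake client instead of a real boto3 one
        resp = handle_post({"body": '{"message":"yo"}'}, fake_sqs)
        self.assertEqual(resp["statusCode"], 200)
        fake_sqs.send_message.assert_called_once_with(
            QueueUrl="fake-queue", MessageBody="yo")
```

moto goes a step further than a hand-rolled `Mock`: it lets the handler run real boto3 calls against an in-memory fake queue, so you don’t have to inject the client yourself.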
Run the tests
```shell
(venv) [mmiller@Mikes-MacBook-Pro-13 Lambda_SQS (master)]$ python -m pytest -v
```
And we’re good with testing! They could probably be enhanced a bit, but this works for now.
Updates
Say the app is up and running, but a new feature needs to be implemented. It’s as easy as running your `sls deploy` command again, and the stack is updated. If you set up some sort of CI/CD, this step can even be automated for you.
Final Thoughts
Although the app shown here is pretty simple, it showcases the power of serverless applications. Companies are moving toward serverless applications composed of many individual microservices. The agility and maintainability make serverless a clear choice for new cloud-native apps. AWS and other providers are making serverless apps easier and easier to build, not to mention the Serverless Framework. Pairing serverless with a CI/CD solution makes maintenance and updates seriously painless.
The cost factor is a huge rationale for serverless as well. You pay for what you need in most cases, and don’t have to worry about over- or under-provisioning. We like variable expense over capital expense. Read more about AWS pricing here.
Give serverless a shot, it’s fun and costs next to nothing to get started.