Serverless: State of the Union

AWS Builders Day Edinburgh Conference DevOps Serverless AWS GraphQL

This talk by Danilo Poccia gave a great introduction to the serverless landscape and the current state of affairs.

What is serverless?

Serverless is a term used to describe a way of running applications and functions where you are not concerned with how it scales and the infrastructure that underlies it. The key features of serverless are:

  • no server management
  • flexible scaling
  • high availability
  • no idle capacity

New Features & Improvements

There have been a lot of new features added to the Lambda offering recently. Some of these are not yet generally available, but the majority are.

Serverless Application Repository

This new feature, which is currently being rolled out across the regions, allows you to search a repository of serverless applications written by AWS or other AWS users and run them just as you would a Lambda function you created yourself. These ready-made apps have their templates defined in SAM, the CloudFormation extension that simplifies the automated creation of Lambda functions. The templates are easily customised and, where necessary, contain parameters that you can replace with your own values, so you can connect a ready-made app to your own S3 bucket, for instance.
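As a rough, illustrative sketch (the parameter and resource names here are made up), a SAM template for such an app might expose a parameter that the consumer replaces with their own value:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Parameters:
  TableName:                 # consumers supply their own table name here
    Type: String
Resources:
  MyAppFunction:             # illustrative function name
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: python3.6
      CodeUri: ./src
      Environment:
        Variables:
          TABLE_NAME: !Ref TableName   # wired to the consumer's resource
```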

Applications that you add to the repository can either be public or private. If you share applications publicly then they will be audited by AWS to ensure that you are not doing anything you shouldn’t be doing. If you want to monetise your applications that is also possible through the AWS Marketplace.

Lambda Function Editor

The Lambda function editor has had several recent updates to make it more usable and create a better serverless experience. The editor is now a slimmed-down version of the Cloud9 IDE available in AWS. It lets you edit multiple files, create new files within existing packages, and is generally more feature-rich than the previous generation of editor. You can also now run tests and view the results and any associated logs without leaving the Lambda editor. Tests can be saved, so you can create them once and re-run them at a later date.

Scaling Limits

By default you can scale Lambda functions to 1,000 concurrent executions in each region. This limit should be high enough for the vast majority of users, but AWS has now made it easier to increase it further. If you need to exceed 1,000 concurrent executions and your account is in good standing (i.e. you pay your bills), you can have the limit raised to 3,000 concurrent executions just by submitting a limit increase request to AWS. Anything beyond that will involve the usual conversations with your AWS account manager about why you need such scale.

Cold Start Optimisation

AWS have worked hard to optimise the cold start performance of Lambda functions, with recent optimisations reducing cold start times by up to 80% for large functions.

Logging and Monitoring

The logging and monitoring within the Lambda console has been improved. You can now use the monitoring to drill into a specific time frame then click straight through to that point in time in the logs in CloudWatch.

Memory Limits

The previous memory limit of 1.5GB has been doubled to 3GB. Once you configure more than 1.8GB, your function also gets an additional CPU core.
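If you define your functions in SAM, memory is configured per function; a minimal, illustrative fragment requesting the new maximum (3008MB, roughly 3GB) might look like:

```yaml
Resources:
  BigFunction:            # illustrative name
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: python3.6
      CodeUri: ./src
      MemorySize: 3008    # new maximum; CPU allocation scales with memory
```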

AWS Cloud9 IDE

AWS have introduced a new service in the form of a cloud-based IDE called Cloud9. It integrates with a variety of AWS services and has built-in support for GitHub. Using Cloud9 you can write your code, then debug and test your serverless functions directly in the IDE with SAM Local built in. Once ready, you can commit the code to GitHub or deploy it directly to Lambda. You can also create CodePipeline pipelines so that pushing changes from Cloud9 triggers the pipeline.

Safe Deployments

Lambda now helps you carry out safe deployments in the form of weighted aliases, which let you split a percentage of traffic between two versions of the same function. This allows you to switch a Lambda function over to a new version gradually rather than sending 100% of traffic to it straight away.
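As a sketch of the routing behaviour (the version names and weight here are illustrative, not tied to any real function), each invocation lands on one of the two versions according to the configured weight:

```python
import random

def route_invocation(primary_version, new_version, new_weight, rng=random.random):
    """Pick which function version serves an invocation.

    new_weight is the fraction of traffic (0.0-1.0) shifted to the new
    version, mirroring how a weighted alias splits requests.
    """
    return new_version if rng() < new_weight else primary_version

# Shift 10% of traffic to version "2" while version "1" keeps the other 90%.
counts = {"1": 0, "2": 0}
rng = random.Random(42)  # seeded for a repeatable demonstration
for _ in range(10_000):
    counts[route_invocation("1", "2", 0.1, rng.random)] += 1
```

Over many invocations the split converges on the configured weight, so roughly 10% of the simulated traffic above reaches version "2".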

API Gateway now supports splitting traffic between two stages. You can use this for deployments where some of the underlying resources also need to change, since different stages can use different resources. The split can run over several days rather than just minutes, giving you time to monitor the performance of your deployment and ensure it is working as expected.

CodeDeploy now has automated support for safe serverless deployments. By utilising CloudWatch metrics, CodeDeploy can perform a safe rollout and rollback. There are three steps to this deployment mechanism:

  • Prevalidation
  • Traffic Shifting
  • Post Deployment Validation

If any CloudWatch alarm triggers during any of the deployment steps, a rollback is performed to ensure that you do not compromise your existing deployment. This means you can monitor things like response time with CloudWatch, and should it rise above a predetermined level during traffic shifting, CodeDeploy can automatically roll back the deployment. It is usually far better to use business metrics to determine suitable behaviour than to rely solely on infrastructure metrics, which don't always show the big picture.
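Assuming the function is defined in SAM, the three steps above map onto a DeploymentPreference block roughly like this (the alarm and hook names are illustrative, and are assumed to be defined elsewhere in the template):

```yaml
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: python3.6
      CodeUri: ./src
      AutoPublishAlias: live
      DeploymentPreference:
        Type: Canary10Percent5Minutes      # shift 10%, wait 5 minutes, then shift the rest
        Alarms:
          - !Ref LatencyAlarm              # roll back if this CloudWatch alarm fires
        Hooks:
          PreTraffic: !Ref PreTrafficValidator    # prevalidation Lambda
          PostTraffic: !Ref PostTrafficValidator  # post-deployment validation Lambda
```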

Concurrency Control

Concurrency control is a key element of managing functions in a serverless app. In some circumstances you may have a serverless function communicating with a legacy system, and while the serverless function will have no problem scaling, the same cannot be said for the legacy system. This is where the ability to throttle concurrency on a per-function basis comes in really handy. You can monitor a function's concurrency by checking its CloudWatch metric. Other uses for concurrency control include:

  • protecting production serverless functions from other functions in the same account consuming all the concurrent executions
  • temporarily disabling functions (e.g. a function gone rogue during development)
  • developing functions with limited billing and runaway protection
  • limiting the number of IP addresses used when running in a VPC

With the introduction of VPC private links you can access data and services in your VPC. A private link creates an endpoint inside your VPC that acts as a collector for all traffic passing into the VPC and on to the services you are trying to connect to.

Structured Logging for APIs

CloudTrail now has support for logging Lambda function invocations. This allows you to have a structured audit log of the invocations of any Lambda function.


What is GraphQL?

GraphQL is an open, declarative data-fetching specification. It is not a graph database: you can use it in conjunction with NoSQL and relational databases, or with HTTP endpoints, etc. GraphQL has four concepts:

  • schema - the data structure
  • mutation - writes (inserts, updates, deletes)
  • query - reads
  • subscriptions - real-time notifications of updates
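A minimal, illustrative schema (the type and field names are made up) showing all four concepts:

```graphql
type Post {
  id: ID!
  title: String!
  comments: [Comment]
}

type Comment {
  id: ID!
  body: String!
}

type Query {
  getPost(id: ID!): Post          # reads
}

type Mutation {
  addPost(title: String!): Post   # writes
}

type Subscription {
  onAddPost: Post                 # real-time notifications
}
```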

Why GraphQL?

GraphQL allows you to model data with an application schema. Your clients can request only the data that they need and this is all that will be returned. GraphQL has built in support for pagination and offline clients.
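For example, assuming a hypothetical getPost query field, a client asking only for a post's title and comment bodies receives exactly those fields and nothing else:

```graphql
query {
  getPost(id: "1") {
    title          # only the fields named here come back;
    comments {     # the server never returns fields the client didn't ask for
      body
    }
  }
}
```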

GraphQL on AWS - AWS AppSync

Currently in open preview, AWS AppSync lets you use GraphQL easily on AWS. AppSync integrates with DynamoDB, Lambda and Elasticsearch, and you can use Cognito for authentication. You can upload a schema or have one autogenerated from a DynamoDB table.