AWS re:Invent is in full swing, with AWS announcing a slew of new features. Most notably, we’re pretty excited about AWS Lambda's support for Layers.
Layers let you include additional files or data for your functions. This could be binaries such as FFmpeg or ImageMagick, or it could be difficult-to-package dependencies, such as NumPy for Python. When your function runs, a layer's contents are extracted into the /opt directory of the execution environment, alongside your function's code. In a way, layers are comparable to EC2 AMIs, but for functions.
The killer feature of Lambda Layers is that they can be shared between Lambda functions, accounts, and even publicly!
There are two aspects to using Lambda Layers:
- Publishing a layer that can be used by other functions
- Using a layer in your function when you publish a new function version
We’re excited to say that, as of version 1.34.0, the Serverless Framework has day-one support for both publishing Lambda Layers and using them in your functions!
See how you can publish and use Lambda Layers with the Serverless Framework below.
Example use case: Creating GIFs with FFmpeg
For a walkthrough, let’s make a service that takes an uploaded video and converts it to a GIF.
We’ll use FFmpeg, an open source tool for manipulating video and audio. FFmpeg is a binary program and a great example use case for a layer, as managing a binary falls outside the responsibility of your runtime’s packaging system.
In this example, we’ll build and publish a layer that contains FFmpeg. Then, we’ll create a Lambda function that uses the FFmpeg layer to convert videos to GIFs.
To get started, create a serverless project for your layer & service:
$ npm i -g serverless # Install or update to serverless v1.34.0 or later for layers support
$ sls create -t aws-nodejs -n gifmaker -p gifmaker
$ cd gifmaker
Then, at the bottom of your serverless.yml, add the following to define the layer that will contain FFmpeg. The path property points to a directory that will be zipped up and published as your layer:
layers:
  ffmpeg:
    path: layer
Run the following commands to download the contents of your layer:
$ mkdir layer
$ cd layer
$ curl -O https://johnvansickle.com/ffmpeg/builds/ffmpeg-git-amd64-static.tar.xz
$ tar xf ffmpeg-git-amd64-static.tar.xz
$ rm ffmpeg-git-amd64-static.tar.xz
$ mv ffmpeg-git-*-amd64-static ffmpeg
$ cd ..
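Before deploying, you can optionally sanity-check that the binary ended up where the function will expect it. At runtime, a layer's contents are extracted under /opt, so layer/ffmpeg/ffmpeg in your project will be available as /opt/ffmpeg/ffmpeg inside the function:
$ ls layer/ffmpeg/ffmpeg
layer/ffmpeg/ffmpeg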
You’re ready to test deployment of your layer. Deploy and you’ll see the layer’s ARN in the output info:
$ sls deploy
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Excluding development dependencies...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service .zip file to S3 (6.24 KB)...
Serverless: Uploading service .zip file to S3 (49.82 MB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
..........
Serverless: Stack update finished...
Service Information
service: gifmaker
stage: dev
region: us-east-1
stack: gifmaker-dev
api keys:
  None
endpoints:
  None
functions:
  hello: gifmaker-dev-hello
layers:
  ffmpeg: arn:aws:lambda:us-east-1:111111111111:layer:ffmpeg:1
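If you want to inspect the published layer outside of the framework, the AWS CLI can list its versions (assuming a reasonably recent AWS CLI configured for the same account and region):
$ aws lambda list-layer-versions --layer-name ffmpeg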
Next, we’ll add a custom section to serverless.yml to specify the S3 bucket name (choose your own globally unique, lowercase bucket name):
custom:
  bucket: my-gifmaker-bucket
Now rename your function from hello to mkgif, specify that your function uses the layer you’re publishing, and add an S3 event configuration:
functions:
  mkgif:
    handler: handler.mkgif
    events:
      - s3: ${self:custom.bucket}
    layers:
      # Ref name is generated by TitleCasing the layer name & appending LambdaLayer
      - {Ref: FfmpegLambdaLayer}
You’ll also need to give your service permission to read and write your S3 bucket. Add the following in the provider section of your serverless.yml file:
iamRoleStatements:
  - Effect: Allow
    Action:
      - s3:PutObject
      - s3:GetObject
    Resource: "arn:aws:s3:::${self:custom.bucket}/*"
Your serverless.yml should now look like this.
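For reference, here's roughly what the assembled file looks like at this point; the runtime line comes from the aws-nodejs template and may differ in your project, and the bucket name is just a placeholder:
service: gifmaker

provider:
  name: aws
  runtime: nodejs8.10 # whatever the aws-nodejs template generated for you
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:GetObject
      Resource: "arn:aws:s3:::${self:custom.bucket}/*"

custom:
  bucket: my-gifmaker-bucket

functions:
  mkgif:
    handler: handler.mkgif
    events:
      - s3: ${self:custom.bucket}
    layers:
      # Ref name is generated by TitleCasing the layer name & appending LambdaLayer
      - {Ref: FfmpegLambdaLayer}

layers:
  ffmpeg:
    path: layer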
Now we need to write the handler. Replace the contents of handler.js with the following code, which fetches the uploaded file from S3, writes it to disk, runs FFmpeg on it, reads the resulting GIF, and uploads it back to S3:
const { spawnSync } = require("child_process");
const { readFileSync, writeFileSync, unlinkSync } = require("fs");
const AWS = require("aws-sdk");
const s3 = new AWS.S3();

module.exports.mkgif = async (event, context) => {
  if (!event.Records) {
    console.log("not an s3 invocation!");
    return;
  }

  for (const record of event.Records) {
    if (!record.s3) {
      console.log("not an s3 invocation!");
      continue;
    }
    if (record.s3.object.key.endsWith(".gif")) {
      console.log("already a gif");
      continue;
    }

    // get the file
    const s3Object = await s3
      .getObject({
        Bucket: record.s3.bucket.name,
        Key: record.s3.object.key
      })
      .promise();

    // write file to disk
    writeFileSync(`/tmp/${record.s3.object.key}`, s3Object.Body);

    // convert to gif!
    spawnSync(
      "/opt/ffmpeg/ffmpeg",
      [
        "-i",
        `/tmp/${record.s3.object.key}`,
        "-f",
        "gif",
        `/tmp/${record.s3.object.key}.gif`
      ],
      { stdio: "inherit" }
    );

    // read gif from disk
    const gifFile = readFileSync(`/tmp/${record.s3.object.key}.gif`);

    // delete the temp files
    unlinkSync(`/tmp/${record.s3.object.key}.gif`);
    unlinkSync(`/tmp/${record.s3.object.key}`);

    // upload gif to s3
    await s3
      .putObject({
        Bucket: record.s3.bucket.name,
        Key: `${record.s3.object.key}.gif`,
        Body: gifFile
      })
      .promise();
  }
};
Now you can deploy both the layer and the updated function with sls deploy. Let’s test it out by uploading a video to our S3 bucket:
$ curl -OL https://archive.org/download/mov-bbb/mov_bbb.mp4
$ aws s3 cp mov_bbb.mp4 s3://YOURBUCKETNAME/mov_bbb.mp4
$ # wait a little bit...
$ aws s3 cp s3://YOURBUCKETNAME/mov_bbb.mp4.gif mov_bbb.mp4.gif
You now have a GIF copy of the mp4 you uploaded!
For the full source of this example, check it out in our examples repo.
Some tips on working with layers
In the example above, instead of specifying an ARN for the layer that the function is using, we used {Ref: FfmpegLambdaLayer}. This is a CloudFormation Reference. The name is derived from your layer's name, e.g., ffmpeg becomes FfmpegLambdaLayer. If you're not sure what your layer's name will be, you can find it by running sls package and then searching for LambdaLayer in .serverless/cloudformation-template-update-stack.json.
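For example, a quick way to dig the generated logical name out of the packaged template (assuming a Unix-like shell with grep available) is something like this, which should print the logical name:
$ sls package
$ grep -o '"[A-Za-z0-9]*LambdaLayer"' .serverless/cloudformation-template-update-stack.json | sort -u
"FfmpegLambdaLayer"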
You may have noticed that every time you deploy your stack, a new layer version is created. This is due to limitations with CloudFormation. The best way to deal with this is by keeping your layer and your function in separate stacks.
Let's try that with the example we just made.
First, create a new folder and move the layer directory into it:
$ cd ..
$ mkdir ffmpeg-layer
$ mv gifmaker/layer ffmpeg-layer/.
$ cd ffmpeg-layer
Remove the top-level layers section in gifmaker/serverless.yml, then create a new serverless.yml in the ffmpeg-layer folder containing:
service: ffmpeg-layer
frameworkVersion: ">=1.34.0 <2.0.0"

provider:
  name: aws

layers:
  ffmpeg:
    path: layer

resources:
  Outputs:
    FfmpegLayerExport:
      Value:
        Ref: FfmpegLambdaLayer
      Export:
        Name: FfmpegLambdaLayer
Now you can run sls deploy to publish your layer!
Go back to the gifmaker service directory and change {Ref: FfmpegLambdaLayer} in the serverless.yml to ${cf:ffmpeg-layer-dev.FfmpegLayerExport}. You can now run sls deploy and it'll use the layer from the other service. Note that the dev in the variable above is the stage of your layer service.
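After that change, the functions section of gifmaker/serverless.yml would look roughly like this (same handler and event as before; only the layers entry changes):
functions:
  mkgif:
    handler: handler.mkgif
    events:
      - s3: ${self:custom.bucket}
    layers:
      # resolves to the layer version ARN exported by the ffmpeg-layer-dev stack
      - ${cf:ffmpeg-layer-dev.FfmpegLayerExport}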
More Examples
You can check out the following projects for examples of using this feature to build a layer. They all leverage Docker and the docker-lambda images to compile for AWS’s Lambda environment on any operating system:
- geoip-lambda-layer - A layer containing MaxMind’s GeoIP libraries
- sqlite-lambda-layer - A layer to fix SQLite support in Python 3.6 runtimes
Awesome layers
Also check out this repository of awesome layers: https://github.com/mthenw/awesome-layers
Custom runtime support: even better!
Along with layers support, AWS also just announced support for building your own runtime using the Runtime API.
This allows you to build, use, and share runtime support for Lambda outside of what AWS officially supports.
Custom runtimes with the Serverless Framework
To use custom runtimes with the Serverless Framework, specify the runtime as provided in your serverless.yml and include a layer that provides the custom runtime. For documentation on building your own runtime, see AWS’s documentation on custom runtimes.
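As a minimal sketch (the layer ARN below is a placeholder; point it at whichever custom-runtime layer you’ve published or been granted access to):
provider:
  name: aws
  runtime: provided # tells Lambda to use a custom runtime instead of a managed one

functions:
  hello:
    handler: handler.hello
    layers:
      # placeholder ARN for a layer that ships your custom runtime
      - arn:aws:lambda:us-east-1:111111111111:layer:my-custom-runtime:1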