To make the most of this tutorial series, create a free Serverless Framework PRO account.
Let's start by taking a closer look at the AWS Lambda service, since this is the one people think of most when talking about serverless. In short, AWS Lambda is a managed compute service that auto-scales and executes your code when a specific, pre-configured event triggers that execution.
But how does that look in practice? Let's take a look at a small example of code. All it does is log some data and return true. It is wrapped in a function that has two parameters: event and context.
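A minimal handler matching that description might look like the following. This is a Python sketch; the handler name and runtime are assumptions, since the original example's language isn't shown here.

```python
import json

def handler(event, context):
    # Log the incoming event so it shows up in CloudWatch Logs
    print("Received event:", json.dumps(event))
    return True
```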
As our definition said, AWS Lambda will execute my code when an event triggers that execution; which event that is depends entirely on the configuration. Previously, we triggered our Lambda using an HTTP event from API Gateway, but Lambda functions can have a whole host of possible events that trigger their execution: putting objects into S3 buckets, scheduled events that fire every few minutes, hours or days, inserting or updating data in a DynamoDB table, and a lot more. This event-driven nature of serverless applications is core to understanding how we can build the solutions we want.
This also means that the event parameter passed to our Lambda function will be unique to the event type; an S3 event looks different to an API Gateway event, which looks different to an SNS event, for example.
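To illustrate how different those shapes are, here is a sketch that guesses the source of an event from its structure. The field names (`Records`, `eventSource`, `httpMethod`) come from the documented event formats for these services; the helper itself is purely for illustration.

```python
def event_source(event):
    """Best-effort guess at which service produced this event."""
    records = event.get("Records", [])
    if records:
        # S3, DynamoDB Streams and SNS deliver a list of records,
        # each tagged with the originating service
        src = records[0].get("eventSource") or records[0].get("EventSource")
        if src == "aws:s3":
            return "s3"
        if src == "aws:dynamodb":
            return "dynamodb"
        if src == "aws:sns":
            return "sns"
    if "httpMethod" in event:
        # API Gateway's REST proxy event carries HTTP request fields instead
        return "apigateway"
    return "unknown"
```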
The context parameter you see is not always required and is only needed if you want additional information about the environment your Lambda function is executing in, such as the memory allocated to the function, the remaining execution time, and the request ID, amongst others.
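In the Python runtime, for example, those values are exposed as attributes on the context object. The `FakeContext` class below is a hypothetical stand-in so the handler can be exercised locally; in AWS, the runtime supplies the real context for you.

```python
def handler(event, context):
    # These attributes exist on the context object the Python runtime passes in
    print("Request ID:", context.aws_request_id)
    print("Memory configured:", context.memory_limit_in_mb, "MB")
    print("Time remaining:", context.get_remaining_time_in_millis(), "ms")
    return True

# Hypothetical stand-in for local testing only; AWS provides the real one.
class FakeContext:
    aws_request_id = "local-test"
    memory_limit_in_mb = 128

    def get_remaining_time_in_millis(self):
        return 300_000
```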
Within our function, we can do whatever we need with the data passed to us by the event object, such as reading from or writing to a database. If this function was triggered by API Gateway, we could return data formatted specifically for an HTTP response, just as we did in our previous example.
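For instance, API Gateway's proxy integration expects a response object with `statusCode`, `headers` and a string `body`. A minimal Python sketch (the `name` query parameter is just an example):

```python
import json

def handler(event, context):
    # Pull an example query parameter out of the API Gateway event
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    # This response shape is what API Gateway's proxy integration expects
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```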
However, not all events require a response; most just need to know whether the Lambda succeeded or failed with an error object. Some services have different retry behaviour depending on whether an error was returned or not.
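In practice that means a normal return signals success, while raising an exception signals failure. A sketch, with a hypothetical validation step for illustration:

```python
def handler(event, context):
    # Hypothetical validation check, just for illustration
    if "payload" not in event:
        # Raising marks the invocation as failed; for asynchronous event
        # sources (S3, SNS, ...) Lambda will retry the event automatically
        raise ValueError("missing payload")
    # A plain return is all most event sources need to see success
    return {"status": "ok"}
```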
So now we have this code; it's been uploaded to AWS Lambda, and it's going to get executed. What happens when we have a lot of traffic? Well, let's walk through an example.
My Lambda is triggered for the first time since deploying it. AWS, in the background, automatically retrieves my function code and the runtime I chose to write it in. It then creates what is called a micro VM, which is similar in most ways to a regular virtual machine except that it is designed to be much smaller and much faster to instantiate. As fast as it is, it still takes some time, and this process is known as a cold start.
Now that my execution environment has started, my function is run by passing it the event object for me to work with. However, while all of this is happening, another event arrives to trigger the same function. We already have one Lambda function still busy executing, so AWS again automatically, and in the background, begins another cold start process.
So while the second event is busy executing within the second micro VM, the original event in the first micro VM has now finished execution. However, that micro VM is not removed or destroyed; it remains available and active. Just as well, because now a third event comes in. But instead of this trigger causing another cold start procedure and the creation of a third micro VM, this time the already available yet idle micro VM is used instead. This is called a hot start, so there's no additional time taken to create a micro VM as before.
The function can begin executing pretty quickly. This also means that the third request will execute faster, as there was less time spent getting everything set up. While AWS is always working to reduce the time a cold start takes, it will always be at least a little slower than a hot start.
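This reuse is also why a common pattern is to do expensive setup outside the handler: module-level code runs once per micro VM, during the cold start, and hot starts reuse it. A sketch (the cache dictionary stands in for something genuinely expensive, like a database client):

```python
import time

# Module-level code runs once, during the cold start of each micro VM.
# Hot starts reuse the same process, so setup done here is paid only once.
BOOTED_AT = time.time()
CACHE = {}  # stand-in for e.g. a reusable database connection

def handler(event, context):
    # On a hot start, BOOTED_AT and CACHE already exist from a previous
    # invocation inside the same micro VM
    CACHE["last_invoked"] = time.time()
    return {"seconds_since_cold_start": round(time.time() - BOOTED_AT, 3)}
```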
However, our example has shown that Lambda functions automatically execute in parallel. In this limited example we only had two in parallel at one point, but by default AWS configures Lambda to execute up to 1,000 functions in parallel per region, and if you find that ends up not being enough, you can just ask AWS and they will increase this limit.
One other important characteristic to bear in mind is execution time: Lambda functions can run for a maximum of 15 minutes. If you need to run code for longer than that, there are alternative options out there, and there are often ways to restructure what you need to do. Now that we've taken this time to get to know AWS Lambda, let's move on to our next topic.
[coming soon] S3