Here at ShoutOUT, we've been happily operating on Amazon Web Services for years. But there is always room for improvement. Luckily, we have the kind of team that doesn’t hesitate to embrace new technologies that will improve the platform—and that’s the very reason why we decided to try serverless architecture.
Before I move into explaining our own serverless architecture, I'd like to share some info on the services and infrastructure we previously had, and why we decided to switch to serverless.
Previously, we ran a couple of services built with Node.js on top of the Express framework. These served our ShoutOUT dashboard and our integrations with several third-party services, and were deployed as Docker containers inside Amazon's Elastic Container Service (ECS).
We chose Docker for the level of support and overall convenience it provided for running microservices. We could keep each service self-contained and scale it individually. Docker also helped ensure that features and fixes shipped seamlessly from our development environment to production. In short, AWS + ECS gave us a solid platform for running and managing Docker containers, and our services ran smoothly with high availability and resiliency.
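For context, a minimal Dockerfile for one of these Express services might have looked something like this. This is an illustrative sketch, not our actual file; the port and entry point are assumptions:

```dockerfile
# Illustrative Dockerfile for a Node.js/Express microservice on ECS.
FROM node:8-alpine

WORKDIR /usr/src/app

# Copy dependency manifests first so Docker can cache the install layer.
COPY package*.json ./
RUN npm install --production

# Copy the application source.
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```

Each service carried its own image like this, which is what made individual scaling and consistent dev-to-production shipping possible.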
However, when the number of services and complexity of each service increased, it became obvious that we needed a way to expand the computational capacity.
The interesting (or not so interesting) fact is that the traffic these services receive is very unpredictable. For example, during the holiday season we get much more traffic than normal through the dashboard. We also see spikes if an integrated third-party application starts sending more traffic for some reason, or if a customer runs a Facebook campaign integrated with their ShoutOUT account. So there is no defined pattern for spikes and idle times.
But when a spike happens, the service must be available and capable of handling the increased workload, especially for third-party integrations where data is being synchronized.
We could have scaled our ECS environment by adding more container instances and multiple service containers, and we did try that at first. But there was a hurdle: as a SaaS business, cost was a critical factor for us, and paying for capacity that sat idle between unpredictable spikes made this solution unappealing.
The following diagram shows an overview of our previous deployment setup.
Around this time, we started to hear more and more serverless success stories that illustrated exactly what we had been looking for as a SaaS solutions provider.
Being able to remove all the scaling concerns was a big relief, especially when combined with the added advantage of paying only for what we use. So, without any further ado, we started migrating our services to the serverless stack.
Luckily, we had built our services with Node.js, which was fully supported by AWS Lambda, so porting them to Lambda functions took relatively little effort. At that time, the Serverless Framework was at a very early stage, and we had no clue that it even existed.
Before we adopted the Serverless Framework, we had to do a lot of manual configuration on API Gateway and Lambda, and deploying multiple services this way was really painful. Thankfully, shortly afterwards, we found this awesome framework! The Serverless Framework saved us a lot of time and streamlined the development-to-deployment process. We fully embraced it, and it solved a big pain point for us.
The Serverless Framework has a great mechanism for provisioning and updating the resources required to run serverless apps. Its command-line interface makes it super easy to deploy new versions to production, with the option to easily roll back if anything goes wrong. This eliminated all the manual configuration we had to do within our AWS account.
The added advantage is that we can deploy the same services to different regions or different AWS accounts with a single CLI command.
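As a rough illustration of how the framework captures all of this, a serverless.yml for one such service might look like the following. The service, function, and handler names are hypothetical, not our real configuration:

```yaml
# Hypothetical serverless.yml sketch; names and paths are illustrative.
service: shoutout-contacts

provider:
  name: aws
  runtime: nodejs8.10
  region: us-east-1        # default; overridable at deploy time

functions:
  getContact:
    handler: handler.getContact
    events:
      - http:
          path: contacts/{id}
          method: get
```

With this in place, `serverless deploy` provisions the API Gateway endpoint and Lambda function in one step, `serverless deploy --region eu-west-1` or `--aws-profile other-account` targets a different region or account, and `serverless rollback --timestamp <ts>` reverts to a previous deployment.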
Around 80% of our backend services were successfully migrated to a serverless stack, and we were able to reduce costs considerably this way. The following diagram shows an overview of our current setup after successful integration with the Serverless Framework:
Apart from cost reduction, we were also able to gain a lot of other advantages through this migration.
Since then, we've taken a serverless-first approach: all new services are built in a serverless fashion unless there is an obvious reason not to. This has helped us dramatically shorten our release cycles, which, as a startup and a SaaS provider, has been hugely beneficial.
Finally, I would like to point out one other thing.
Cloud technologies and platforms, especially serverless architectures, evolve and improve at a very fast pace. As solutions providers, we need to have our systems prepared to embrace and cope with these new technologies. We should always adapt our systems to them, or we will end up with legacy systems that don’t reap the benefits discussed above.
I suggest you take stock of how far you are lagging behind new technologies, and ask whether you can improve what you are doing to keep up in an ever-changing industry.
Asanka is a co-founder and the VP of Technology at ShoutOUT Labs.
guides-and-tutorials, operations-and-observability, engineering-culture - 21.03.18